CN1765123A - Image processing apparatus, image processing method and program


Info

Publication number
CN1765123A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CNA2005800001376A
Other languages
Chinese (zh)
Other versions
CN100423557C (en)
Inventor
近藤哲二郎
金丸昌宪
Current Assignee
Sony Corp
Original Assignee
Sony Corp
Priority date
Filing date
Publication date
Application filed by Sony Corp
Publication of CN1765123A
Application granted
Publication of CN100423557C
Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

A motion vector detecting section (30a) detects a motion vector by using images each composed of a plurality of pixels and obtained by an image sensor having a time integration effect. A time resolution creating section (90) generates an image of high temporal resolution by using the detected motion vector and the images each composed of the plurality of pixels. A motion-blurring-mitigated image generating section (40) treats the pixel values of the pixels of a moving object in the images as values obtained by shifting, while integrating in the time direction, the pixel values of pixels in which no motion blur corresponding to the moving object occurs. The motion-blurring-mitigated image generating section (40) then uses the detected motion vector to generate a motion-blurring-mitigated image in which the motion blur of the moving object has been mitigated.

Description

Image processing apparatus, image processing method and program
Technical field
The present invention relates to an apparatus, a method, and a program for processing images. More particularly, they detect a motion vector by using images that are each composed of a plurality of pixels and obtained by an image sensor having a time integration effect, and, by using the detected motion vector and the images composed of the plurality of pixels, they generate an image having a higher temporal resolution than those images. They can also use the detected motion vector to mitigate the motion blur that occurs in a moving object in the image.
Background art
In a conventional frame-rate conversion system such as 2-3 pull-down in telecine conversion, the frame rate is converted by periodically repeating a process in which one frame of a film image sequence is repeated twice and a process in which it is repeated three times. As disclosed in Japanese Patent Application Publication No. 2002-199349, a signal of a high-definition, high-temporal-resolution image realizing natural motion can be obtained by learning, for each class of features of a student image signal having the pre-conversion frame rate, the relationship between a teacher image signal having the post-conversion frame rate and the corresponding student image signal, and by using the predictive coefficients obtained through this learning to convert an image signal having the pre-conversion frame rate into an image signal having the post-conversion frame rate.
Meanwhile, image processing other than this frame-rate conversion may also require a motion vector in some cases. If a motion vector were detected and used separately for each such image process, a complicated structure would be needed. Moreover, if the motion vector used in the image processing is not detected properly, the processing cannot yield the desired image, such as a high-definition image realizing natural motion.
Summary of the invention
In view of the above, in order to perform image processing efficiently and to obtain a desired image through that image processing, an apparatus for processing images according to the present invention comprises: motion vector detecting means for detecting a motion vector by using images each composed of a plurality of pixels and obtained by an image sensor having a time integration effect; time resolution creating means for generating, by using the motion vector detected by the motion vector detecting means and the images composed of the plurality of pixels, an image having a higher temporal resolution than those images; and motion-blurring-mitigated image generating means which, on the assumption that the pixel value of each pixel of a moving object in the image is a value obtained by integrating in the time direction the pixel values of pixels free from the motion blur corresponding to the moving object, generates a motion-blurring-mitigated image in which the motion blur of the moving object has been mitigated, by using the motion vector detected by the motion vector detecting means.
A method for processing images according to the present invention comprises: a motion vector detecting step of detecting a motion vector by using images each composed of a plurality of pixels and obtained by an image sensor having a time integration effect; a time resolution creating step of generating, by using the motion vector detected in the motion vector detecting step and the images composed of the plurality of pixels, an image having a higher temporal resolution than those images; and a motion-blurring-mitigated image generating step of generating, on the assumption that the pixel value of each pixel of a moving object in the image is a value obtained by integrating in the time direction the pixel values of pixels free from the motion blur corresponding to the moving object, a motion-blurring-mitigated image in which the motion blur of the moving object has been mitigated, by using the motion vector detected in the motion vector detecting step.
A program according to the present invention causes a computer to execute: a motion vector detecting step of detecting a motion vector by using images each composed of a plurality of pixels and obtained by an image sensor having a time integration effect; a time resolution creating step of generating, by using the motion vector detected in the motion vector detecting step and the images composed of the plurality of pixels, an image having a higher temporal resolution than those images; and a motion-blurring-mitigated image generating step of generating, on the assumption that the pixel value of each pixel of a moving object in the image is a value obtained by integrating in the time direction the pixel values of pixels free from the motion blur corresponding to the moving object, a motion-blurring-mitigated image in which the motion blur of the moving object has been mitigated, by using the motion vector detected in the motion vector detecting step.
In the present invention, a motion vector is detected by using images each composed of a plurality of pixels and obtained by an image sensor having a time integration effect, so that the motion vector for an image having a high temporal resolution can be detected. By using the detected motion vector and the images composed of the plurality of pixels, an image having a high temporal resolution can be generated. Furthermore, on the assumption that the pixel value of a pixel of the moving object is a value obtained by integrating in the time direction the pixel values of pixels free from the motion blur corresponding to the moving object, the motion blur occurring in the moving object can be mitigated by using a motion vector corrected according to the exposure time. In addition, by using the motion-blurring-mitigated image as the image composed of the plurality of pixels, an image that has a higher temporal resolution and mitigated motion blur can be generated.
According to the present invention, a motion vector is detected by using images each composed of a plurality of pixels and obtained by an image sensor having a time integration effect, so that the detected motion vector and the images composed of the plurality of pixels can be used to generate an image having a higher temporal resolution than those images. In addition, on the assumption that the pixel value of a pixel of the moving object in the image is a value obtained by integrating in the time direction the pixel values of pixels free from the motion blur corresponding to the moving object, the motion blur occurring in the moving object can be mitigated according to the detected motion vector. Separate motion vector detection therefore need not be performed for the processing that generates the image with high temporal resolution and for the processing that mitigates motion blur, so that a simple configuration can both generate an image with high temporal resolution and mitigate motion blur.
Furthermore, by generating the image with high temporal resolution from the motion-blurring-mitigated image, motion blur in the image with high temporal resolution can be suppressed.
In addition, by detecting the motion vector using a plurality of images, each composed of a plurality of pixels and obtained by the image sensor, and using the detected motion vector in an allocation process, a motion vector can be generated for the image with high temporal resolution, so that such an image can be generated appropriately. Moreover, because the detected motion vector is corrected according to the exposure time, motion blur can be mitigated appropriately even when a shutter operation or the like is performed.
In addition, by using the motion vector detected by the motion vector detecting means, the motion vector of a target pixel in the image to be generated can be determined; a plurality of pixels corresponding to the target pixel can be extracted from the image obtained by the image sensor as a class tap, and a class corresponding to the target pixel can be determined according to the pixel values of this class tap. Furthermore, assuming a first image having a temporal resolution corresponding to the image obtained by the image sensor and a second image having a higher temporal resolution than the first image, predictive coefficients for predicting a target pixel in the second image from a plurality of pixels in the corresponding first image are determined for each class; a plurality of pixels corresponding to the target pixel in the image to be generated are extracted from the image obtained by the image sensor as a prediction tap, and a predicted value corresponding to the target pixel is generated by a one-dimensional linear combination of the predictive coefficients and the prediction tap, thereby generating the image with high temporal resolution. This makes it possible to obtain a high-definition image of higher temporal resolution realizing natural motion.
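The classification and prediction described above can be sketched in a few lines. The sketch below is illustrative only: the patent does not fix the class-decision scheme, so a 1-bit ADRC-style quantization of the class taps is assumed here, and the tap layouts and function names are not from the source.

```python
def class_code(class_taps):
    """Toy class decision: quantize each class-tap pixel to 1 bit against the
    mid-level of the taps (an ADRC-style assumption, not specified in the
    patent). The resulting tuple identifies the class of the target pixel."""
    lo, hi = min(class_taps), max(class_taps)
    mid = (lo + hi) / 2.0
    return tuple(1 if t >= mid else 0 for t in class_taps)

def predict_pixel(prediction_taps, coefficients):
    """One-dimensional linear combination of the prediction taps and the
    predictive coefficients learned for the pixel's class."""
    return sum(t * c for t, c in zip(prediction_taps, coefficients))
```

In use, `class_code` would select which learned coefficient set to pass to `predict_pixel` for each target pixel of the high-temporal-resolution image.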
Description of drawings
Fig. 1 is a block diagram of a system to which the present invention is applied;
Fig. 2 is a schematic diagram of an image captured by an image sensor;
Figs. 3A and 3B are schematic diagrams of captured images;
Fig. 4 is a schematic diagram illustrating the division of pixel values in the time direction;
Fig. 5 is a block diagram of an apparatus for processing images;
Fig. 6 is a block diagram of a motion vector detecting section;
Fig. 7 is a block diagram of a motion-blurring-mitigated image generating section;
Fig. 8 is a block diagram of an area identifying section;
Fig. 9 is a schematic diagram of image data read from an image memory;
Fig. 10 is a schematic diagram of area decision processing;
Fig. 11 is a block diagram of a mixture ratio calculating section;
Fig. 12 is a schematic diagram of a theoretical mixture ratio;
Fig. 13 is a block diagram of a foreground/background separating section;
Fig. 14 is a block diagram of a motion blur adjusting section;
Fig. 15 is a schematic diagram of adjustment processing units;
Fig. 16 is a schematic diagram of the positions of pixel values whose motion blur has been mitigated;
Fig. 17 is a schematic diagram of another configuration of the apparatus for processing images;
Fig. 18 is a flowchart of the operation of the apparatus for processing images;
Fig. 19 is a flowchart of processing for generating a motion-blurring-mitigated image;
Fig. 20 is a block diagram of another configuration of the motion-blurring-mitigated image generating section;
Fig. 21 is a schematic diagram of a processing region;
Figs. 22A and 22B are schematic diagrams of examples of setting a processing region;
Fig. 23 is a schematic diagram illustrating the temporal mixture of real-world variables in a processing region;
Figs. 24A to 24C are schematic diagrams of an example of a moving object;
Figs. 25A to 25F are schematic diagrams of enlarged display images in which an object is tracked;
Fig. 26 is a block diagram of still another configuration of the apparatus for processing images;
Fig. 27 is a block diagram of the configuration of a spatial resolution creating section;
Fig. 28 is a block diagram of a learning device;
Fig. 29 is a flowchart (first half) of the operation when spatial resolution creation is combined;
Fig. 30 is a flowchart (latter half) of the operation when spatial resolution creation is combined;
Fig. 31 is a block diagram of still another apparatus for processing images;
Fig. 32 is a block diagram of another configuration of the motion vector detecting section;
Fig. 33 is a schematic diagram illustrating motion vector allocation processing;
Fig. 34 is a schematic diagram of the configuration of a time resolution creating section;
Figs. 35A and 35B are schematic diagrams illustrating the operation of a time mode value determining section;
Fig. 36 is a schematic diagram of class pixel groups;
Fig. 37 is a schematic diagram illustrating class value determination processing;
Fig. 38 is a flowchart of time resolution creation processing;
Fig. 39 is a flowchart of tap center position determination processing;
Fig. 40 is a block diagram of a learning device;
Fig. 41 is a flowchart of processing for learning predictive coefficients;
Fig. 42 is a flowchart of the operation when time resolution creation is combined; and
Fig. 43 is a flowchart of the operation when region selection can be performed.
Embodiment
An embodiment of the present invention will be described below with reference to the drawings. Fig. 1 is a block diagram of a system to which the present invention is applied. An image sensor 10, which is constituted by a video camera or the like equipped with a solid-state imaging device such as a charge-coupled device (CCD) area sensor or a CMOS area sensor, captures the real world. For example, as shown in Fig. 2, when a moving object OBf corresponding to the foreground moves in the direction of arrow A between the image sensor 10 and an object OBb corresponding to the background, the image sensor 10 captures both the object OBb corresponding to the background and the moving object OBf corresponding to the foreground.
The image sensor 10 is composed of a plurality of detecting elements, each having a time integration effect, so that each detecting element integrates the charge generated from the incident light during the exposure time. That is, the image sensor 10 performs photoelectric conversion in converting the incident light into charge, accumulating it in units of, for example, one frame period. Pixel data is generated according to the accumulated amount of charge, and this pixel data is used to generate image data DVa having a desired frame rate, which is supplied to an apparatus 20 for processing images. The image sensor 10 further has a shutter function; when the image data DVa is generated with the exposure time adjusted according to the shutter speed, an exposure time parameter HE representing the exposure time is supplied to the apparatus 20 for processing images. This exposure time parameter HE represents the shutter-open time within one frame period as a value from, for example, "0" to "1.0"; the value is set to 1.0 when the shutter function is not used, and to 0.5 when the shutter time is 1/2 of the frame period.
The apparatus 20 for processing images extracts significant information that is buried in the image data DVa owing to the time integration effect of the image sensor 10, and uses this significant information to mitigate the motion blur caused by the time integration effect in the moving object OBf corresponding to the moving foreground. Note that region selection information HA, for selecting the image region in which motion blur is to be mitigated, is supplied to the apparatus 20 for processing images.
Fig. 3 schematically illustrates the captured image represented by the image data DVa. Fig. 3A shows an image obtained by capturing the moving object OBf corresponding to the moving foreground and the object OBb corresponding to the stationary background. Here, the object OBf corresponding to the foreground is assumed to move horizontally in the direction of arrow A.
Fig. 3B shows the relationship between the image and time along the line L indicated by the broken line in Fig. 3A. In a case where the length of the moving object OBf along line L is, for example, nine pixels and it moves five pixels within one exposure time, the front end located at pixel position P21 and the rear end located at pixel position P13 at the start of the frame period move to pixel positions P25 and P17, respectively, by the end of the exposure time. If the shutter function is not used, the exposure time within one frame equals one frame period, so that the front end and the rear end are located at pixel positions P26 and P18, respectively, at the start of the next frame period. For simplicity of explanation, it is assumed that the shutter function is not used unless otherwise specified.
Therefore, within a frame period of line L, the part before pixel position P12 and the part after pixel position P26 constitute a background region composed only of the background component. The part between pixel positions P17 and P21 constitutes a foreground region composed only of the foreground component. The part between pixel positions P13 and P16 and the part between pixel positions P22 and P25 each constitute a mixed region in which the foreground and background components are mixed. The mixed regions are classified into a covered background region, in which the background component is covered by the foreground as time passes, and an uncovered background region, in which the background component appears as time passes. Note that, in Fig. 3B, the mixed region located at the front of the foreground object in its direction of travel is a covered background region, and the mixed region located at its rear end is an uncovered background region. The image data DVa thus contains an image that includes a foreground region, a background region, a covered background region, and an uncovered background region.
Note that one frame is short in time. On the assumption that the moving object OBf corresponding to the foreground is rigid and moves at a constant speed, the pixel value within one exposure time is divided in the time direction, as shown in Fig. 4, into equal time intervals according to a virtual division number.
The virtual division number is set according to the amount of motion v of the moving object corresponding to the foreground within one frame period. For example, if the amount of motion v within one frame period is five pixels as described above, the virtual division number is set to "5" according to the amount of motion v, and one frame period is divided into five equal time intervals.
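The time integration effect described above can be simulated directly. The sketch below (the pixel values, positions, and helper name are illustrative assumptions, not from the patent) reproduces the setup of Figs. 3B and 4: a nine-pixel foreground with rear end at P13 moves five pixels during one exposure, and each observed pixel value is the average of its v = 5 virtual-time components.

```python
def observed_line(background, fg, rear_start, v):
    """Simulate one exposure of a 1-D line of pixels under the time
    integration effect.

    background: list of per-pixel background values B[x]
    fg:         foreground values F01..F09, fg[0] at the rear end
    rear_start: pixel position of the foreground rear end at exposure start
    v:          amount of motion in pixels per frame = virtual division number
    """
    out = [0.0] * len(background)
    for t in range(v):                       # one virtual-time slice per division
        rear = rear_start + t                # foreground shifts one pixel per slice
        for x in range(len(background)):
            k = x - rear                     # foreground index if this pixel is covered
            if 0 <= k < len(fg):
                out[x] += fg[k] / v          # foreground component F(k+1)/v
            else:
                out[x] += background[x] / v  # background component B(x)/v
    return out
```

With a uniform background of 10 and foreground values 101 (F01) through 109 (F09), position P15 comes out as 2·(10/5) + (101+102+103)/5 = 65.2, matching equation (1): two background slices and the three foreground components F01/v, F02/v, F03/v.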
In addition, it is assumed that the pixel value obtained at pixel position Px when the object OBb corresponding to the background is captured within one frame period is Bx, and that the pixel values obtained for the pixels when the moving object OBf corresponding to the foreground, which has a length of nine pixels along line L, is captured in a stationary state are F09 (front end side) to F01 (rear end side).
In this case, for example, the pixel value DP15 at pixel position P15 is given by equation (1):
DP15 = B15/v + B15/v + F01/v + F02/v + F03/v ... (1)
This pixel position P15 contains the background component for two divided virtual times (frame period/v) and foreground components for three divided virtual times, so that the mixture ratio α of the background component is 2/5. Similarly, for example, pixel position P22 contains the background component for one divided virtual time and foreground components for four divided virtual times, so that its mixture ratio α is 1/5.
Because the moving object corresponding to the foreground is assumed to be rigid and to move at a constant speed such that its image is displayed five pixels to the right in the next frame, the foreground component (F01/v) of pixel position P13 in the first divided virtual time is identical to the foreground component of pixel position P14 in the second divided virtual time, of pixel position P15 in the third divided virtual time, of pixel position P16 in the fourth divided virtual time, and of pixel position P17 in the fifth divided virtual time. The same holds from the foreground component (F02/v) of pixel position P14 in the first divided virtual time through the foreground component (F09/v) of pixel position P21 in the first divided virtual time, as in the case of the foreground component (F01/v).
Therefore, the pixel value DP of each pixel position can be expressed using the mixture ratio α as shown in equation (2), where "FE" represents the sum of the foreground components:
DP = α·B + FE ... (2)
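Equation (2) and its inverse are trivial to state in code; the sketch below (function names assumed for illustration) shows the relation the later foreground/background separation relies on: once α is known, the foreground term FE can be pulled out of a mixed pixel.

```python
def mixed_pixel(alpha, background, foreground_sum):
    """Equation (2): DP = alpha * B + FE."""
    return alpha * background + foreground_sum

def foreground_from_mixed(dp, alpha, background):
    """Separate the foreground term from a mixed pixel: FE = DP - alpha * B."""
    return dp - alpha * background
```

For pixel position P15, with α = 2/5, B = 10, and FE = (F01 + F02 + F03)/v = 61.2, this reproduces DP15 = 65.2 from equation (1).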
Because the foreground component moves in this way, different foreground components are added together within one frame period, so that the foreground region corresponding to the moving object contains motion blur. The apparatus 20 for processing images therefore extracts the mixture ratio α as significant information buried in the image data DVa, and uses this mixture ratio α to generate image data DVout in which the motion blur of the moving object OBf corresponding to the foreground has been mitigated.
Fig. 5 is a block diagram of the apparatus 20 for processing images. The image data DVa supplied to the apparatus 20 is supplied to a motion vector detecting section 30 and a motion-blurring-mitigated image generating section 40. The region selection information HA and the exposure time parameter HE are also supplied to the motion vector detecting section 30, as is image data DVm, described later, read from a memory 55. The motion vector detecting section 30 sequentially extracts, according to the region selection information HA, the processing region that is to undergo motion blur mitigation processing. It also detects the motion vector MVC corresponding to the moving object in the processing region by using the image data of the processing region in the image data DVa or DVm, and supplies it to the motion-blurring-mitigated image generating section 40. For example, it sets a target pixel at the position of the moving object in at least one of first and second images occurring consecutively in time, and detects the motion vector corresponding to this target pixel by using the first and second images. It also generates processing region information HZ representing the processing region and supplies this information to the motion-blurring-mitigated image generating section 40. It further updates the region selection information HA according to the motion of the foreground object, so that the processing region moves to follow the moving object.
The motion-blurring-mitigated image generating section 40 specifies regions or calculates a mixture ratio according to the motion vector MVC, the processing region information HZ, and the image data DVa, and uses the calculated mixture ratio to separate the foreground component and the background component from each other. It also performs motion blur adjustment on the image of the separated foreground component to generate foreground component image data DBf, which is the image data of the motion-blur-mitigated object image. It then combines the foreground region image based on the foreground component image data DBf with a background image based on background component image data DBb to generate the image data DVout of the motion-blurring-mitigated image. This image data DVout is supplied to the memory 55 and to an image display device, not shown. Here, the foreground region image, which is the motion-blur-mitigated object image, is combined at the spatio-temporal position corresponding to the detected motion vector MVC, so that the motion-blur-mitigated image of the moving object is output at a position along the trajectory of the moving object. That is, when the motion vector is detected by using at least first and second images occurring consecutively in time, the motion-blur-mitigated image of the moving object is combined at the position of the target pixel corresponding to the detected motion vector in one of the images, or at the corresponding position of the target pixel in the other image.
Fig. 6 is a block diagram of the motion vector detecting section 30. The region selection information HA is supplied to a processing region setting section 31. The image data DVa and the image data DVm read from the memory 55 are supplied to an image data selecting section 32. The exposure time parameter HE is supplied to a motion vector correcting section 34.
The processing region setting section 31 sequentially extracts, according to the region selection information HA, the processing region that is to undergo motion blur mitigation processing, and supplies the processing region information HZ representing the processing region to a detecting section 33 and to the motion-blurring-mitigated image generating section 40. It also updates the region selection information HA by using the motion vector detected by the detecting section 33, described later, so that the image region in which motion blur is mitigated can be tracked in a manner that follows the motion of the moving object.
The image data selecting section 32 supplies the image data DVa to the detecting section 33 until the motion vector MV can be detected by using the image data DVm read from the memory 55; once the motion vector can be detected by using the image data DVm read from the memory 55, it supplies the image data DVm read from the memory 55 to the detecting section 33.
The detecting section 33 performs motion vector detection on the processing region indicated by the processing region information HZ by using, for example, a block matching algorithm, a gradient method, a phase correlation method, or a pel-recursive algorithm, and supplies the detected motion vector MV to the motion vector correcting section 34. Alternatively, the detecting section 33 detects, in the periphery of a tracking point set in the region indicated by the region selection information HA, a region (or regions) having the same image feature quantity as the region indicated by the region selection information HA, using the image data of peripheral frames in a plurality of time directions; it thereby calculates the motion vector MV at the tracking point and supplies it to the processing region setting section 31.
The motion vector MV output by the detecting section 33 contains information corresponding to the amount of motion (magnitude) and the direction of motion (angle). The amount of motion is a value representing the change in position of the image corresponding to the moving object. For example, if the moving object OBf corresponding to the foreground has moved by move-x horizontally and move-y vertically in the frame following a certain reference frame, its amount of motion can be obtained by equation (3), and its direction of motion by equation (4). Only one pair of an amount of motion and a direction of motion is given for one processing region.
Amount of motion = √((move-x)² + (move-y)²) ... (3)
Direction of motion = tan⁻¹(move-y/move-x) ... (4)
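Equations (3) and (4) can be computed as follows. This is a minimal sketch; the function name is an assumption, and `math.atan2` is used in place of the bare arctangent of equation (4) so that all four quadrants are handled correctly.

```python
import math

def amount_and_direction(move_x, move_y):
    """Equation (3): amount of motion = sqrt(move_x**2 + move_y**2).
    Equation (4): direction of motion = arctan(move_y / move_x),
    computed with atan2 (a robustness choice) and returned in degrees."""
    amount = math.hypot(move_x, move_y)
    direction_deg = math.degrees(math.atan2(move_y, move_x))
    return amount, direction_deg
```

For example, a displacement of 3 pixels horizontally and 4 pixels vertically gives an amount of motion of 5 pixels at roughly 53.1 degrees.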
The motion vector correction part 34 corrects the motion vector MV by using the exposure time parameter HE. The motion vector MV supplied to the motion vector correction part 34 is, as described above, an inter-frame motion vector. However, the processing performed later by the motion blurring reduction image producing part 40 uses intra-frame motion vectors, so if an inter-frame motion vector were used while the in-frame exposure time is shorter than one frame period because a shutter function is in use, the motion blur mitigation processing could not be carried out correctly. Therefore, the motion vector MV, which is an inter-frame motion vector, is corrected according to the ratio of the exposure time to the frame period and is supplied to the motion blurring reduction image producing part 40 as the motion vector MVC.
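The exposure-time correction can be sketched as below; the tuple representation of the vector and the function name are hypothetical, and the correction is assumed to be a plain scaling by the exposure-time-to-frame-period ratio, as the text describes.

```python
def correct_motion_vector(mv, exposure_time, frame_period):
    """Scale an inter-frame motion vector into an intra-frame one
    (a sketch of motion vector correction part 34, assuming the
    correction is a simple scaling by exposure_time / frame_period)."""
    ratio = exposure_time / frame_period
    return (mv[0] * ratio, mv[1] * ratio)

# A 10-pixel/frame vector with a half-open shutter corresponds to
# 5 pixels of motion within the exposure.
mvc = correct_motion_vector((10.0, 0.0), exposure_time=1/60, frame_period=1/30)
print(mvc)
```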
Fig. 7 shows a block diagram of the motion blurring reduction image producing part 40. An area identification part 41 produces information (hereinafter simply called "area information") AR indicating, for each pixel within the processing region designated by the processing region information HZ in the image represented by the image data DVa, which of the foreground area, the background area, and the mixed area the pixel belongs to, and supplies it to a mixing ratio calculating part 42, a foreground/background separation part 43, and a motion blur adjusting part 44.

The mixing ratio calculating part 42 calculates the mixing ratio of the background components in the mixed area according to the image data DVa and the area information AR supplied from the area identification part 41, and supplies the calculated mixing ratio to the foreground/background separation part 43.

According to the area information AR supplied from the area identification part 41 and the mixing ratio α supplied from the mixing ratio calculating part 42, the foreground/background separation part 43 separates the image data DVa into foreground component image data DBe composed of the foreground components only and background component image data DBb composed of the background components only, and supplies the foreground component image data DBe to the motion blur adjusting part 44.

The motion blur adjusting part 44 determines an adjusting processing unit, which designates at least one pixel included in the foreground component image data DBe, according to the amount of motion indicated by the motion vector MVC and the area information AR. The adjusting processing unit is data designating one group of pixels that are to undergo the motion blur mitigation processing.

On the basis of the foreground component image supplied from the foreground/background separation part 43, the motion vector MVC supplied from the motion vector detection part 30 together with its area information AR, and the adjusting processing unit, the motion blur adjusting part 44 mitigates the motion blur contained in the foreground component image data DBe. It supplies the motion-blur-mitigated foreground component image data DBf to an output part 45.
Fig. 8 shows a block diagram of the area identification part 41. An image memory 411 stores the input image data DVa in units of frames. When frame #n is to be processed, the image memory 411 stores frame #n-2, which occurs two frames before frame #n in time, frame #n-1, which occurs one frame before frame #n, frame #n, frame #n+1, which occurs one frame after frame #n, and frame #n+2, which occurs two frames after frame #n.

A static/motion determination part 412 reads, from the image memory 411, the image data of frames #n-2, #n-1, #n+1 and #n+2 for the same region as the one designated by the processing region information HZ for frame #n, and calculates inter-frame absolute differences between the read items of image data. It judges whether a pixel is in a moving part or a stationary part according to whether the inter-frame absolute difference is higher than a preset threshold value Th, and supplies static/motion determination information SM representing this judgment result to a region decision part 413.

Fig. 9 shows the image data read from the image memory 411. Note that Fig. 9 shows the case where the image data of pixel positions P01-P37 of one line in the region indicated by the processing region information HZ are read.

The static/motion determination part 412 obtains, for each pixel, the inter-frame absolute difference between two successive frames and judges whether this inter-frame absolute difference is higher than the preset threshold value Th; if the inter-frame absolute difference is higher than the threshold value Th, it judges "motion", and if it is not higher than the threshold value Th, it judges "static".
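A minimal sketch of this per-pixel static/motion judgment, assuming frames are given as flat lists of pixel values; the threshold value and data layout here are illustrative only.

```python
def static_motion_judgment(frame_a, frame_b, threshold):
    """Per-pixel judgment of static/motion determination part 412
    (a sketch): the inter-frame absolute difference of each pixel is
    compared with the preset threshold Th."""
    return ["motion" if abs(a - b) > threshold else "static"
            for a, b in zip(frame_a, frame_b)]

labels = static_motion_judgment([10, 10, 200], [10, 12, 50], threshold=16)
print(labels)  # ['static', 'static', 'motion']
```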
The region decision part 413 carries out the region decision processing shown in Fig. 10 by using the judgment results obtained at the static/motion determination part 412, and judges, for each pixel in the region identified by the processing region information HZ, which of the still area, the covered background area, the uncovered background area and the moving area the pixel belongs to.

First, a pixel judged as static by the static/motion determination on frames #n-1 and #n is judged to be a pixel of the still area. A pixel judged as static by the static/motion determination on frames #n and #n+1 is likewise judged to be a pixel of the still area.

Next, a pixel judged as moving by the static/motion determination on frames #n-1 and #n but judged as static by the static/motion determination on frames #n-2 and #n-1 is judged to be a pixel of the covered background area. A pixel judged as moving by the static/motion determination on frames #n and #n+1 but judged as static by the static/motion determination on frames #n+1 and #n+2 is judged to be a pixel of the uncovered background area.

Thereafter, a pixel judged as moving both by the static/motion determination on frames #n-1 and #n and by the static/motion determination on frames #n and #n+1 is judged to be a pixel of the moving area.

Note that there are cases where a pixel located on the moving-area side of the covered background area, or on the moving-area side of the uncovered background area, is determined to be in the covered background area or the uncovered background area, respectively, even though no background component is contained in it. For example, pixel position P21 in Fig. 9 is judged as static by the static/motion determination on frames #n-2 and #n-1 but as moving by the static/motion determination on frames #n-1 and #n, so it could be decided to be in the covered background area even though no background component is contained in it. Another pixel position P17 is judged as moving by the static/motion determination on frames #n and #n+1 but as static by the static/motion determination on frames #n+1 and #n+2, so it could be decided to be in the uncovered background area even though no background component is contained in it. Therefore, by correcting each pixel located on the moving-area side of the covered background area and each pixel located on the moving-area side of the uncovered background area into a pixel of the moving area, the region decision can be performed accurately for each pixel. By performing the region decision in this way, area information AR representing which of the still area, the covered background area, the uncovered background area and the moving area each pixel belongs to is produced and supplied to the mixing ratio calculating part 42, the foreground/background separation part 43 and the motion blur adjusting part 44.
Note that the area identification part 41 may take the logical sum (OR) of the area information of the uncovered background area and the area information of the covered background area to produce area information of the mixed area, so that the area information AR represents which of the still area, the mixed area and the moving area each pixel belongs to.
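The region decision rules above can be sketched as follows; the precedence of the covered/uncovered tests over the still-area test is an assumption made here to resolve edge-case pixels such as P17 and P21, and the function name and string labels are illustrative.

```python
def classify_pixel(sm_n2_n1, sm_n1_n, sm_n_n1, sm_n1_n2):
    """Region decision of part 413 (a sketch).

    Each argument is the static/motion judgment ("static" or "motion")
    for one frame pair: (#n-2,#n-1), (#n-1,#n), (#n,#n+1), (#n+1,#n+2).
    """
    if sm_n1_n == "motion" and sm_n2_n1 == "static":
        return "covered"       # leading edge of the moving object
    if sm_n_n1 == "motion" and sm_n1_n2 == "static":
        return "uncovered"     # trailing edge of the moving object
    if sm_n1_n == "motion" and sm_n_n1 == "motion":
        return "moving"
    return "still"

# Pixel P21 of Fig. 9: static between #n-2/#n-1, moving afterwards.
print(classify_pixel("static", "motion", "motion", "motion"))  # covered
```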
Fig. 11 shows a block diagram of the mixing ratio calculating part 42. An estimated mixing ratio processing section 421 calculates an estimated mixing ratio αc of each pixel by performing operations for the covered background area according to the image data DVa, and supplies the calculated estimated mixing ratio αc to a mixing ratio determining section 423. Another estimated mixing ratio processing section 422 calculates an estimated mixing ratio αu of each pixel by performing operations for the uncovered background area according to the image data DVa, and supplies the calculated estimated mixing ratio αu to the mixing ratio determining section 423.

The mixing ratio determining section 423 sets the mixing ratio α of the background components according to the estimated mixing ratios αc and αu supplied from the estimated mixing ratio processing sections 421 and 422, respectively, and the area information AR supplied from the area identification part 41. If the target pixel belongs to the moving area, the mixing ratio determining section 423 sets the mixing ratio α to 0 (α = 0). If the target pixel belongs to the still area, it sets the mixing ratio to 1 (α = 1). If the target pixel belongs to the covered background area, it sets the estimated mixing ratio αc supplied from the estimated mixing ratio processing section 421 as the mixing ratio α; and if the target pixel belongs to the uncovered background area, it sets the estimated mixing ratio αu supplied from the estimated mixing ratio processing section 422 as the mixing ratio α. The mixing ratio α thus set is supplied to the foreground/background separation part 43.

Here, if the frame period is short, and it can therefore be assumed that the moving object corresponding to the foreground is rigid and moves at a constant speed within the frame period, the mixing ratio α of a pixel belonging to the mixed area varies linearly with the change of the pixel position. In this case, as shown in Fig. 12, the gradient θ of the theoretical mixing ratio α in the mixed area can be expressed as the inverse of the amount of motion of the moving object corresponding to the foreground within the frame period. That is, the mixing ratio α takes the value "1" in the still area, takes the value "0" in the moving area, and varies within the range of "0" to "1" in the mixed area.
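A sketch of the selection performed by the mixing ratio determining section 423; the string area labels and the function name are assumptions for illustration.

```python
def determine_mixing_ratio(area, alpha_c, alpha_u):
    """Mixing ratio selection of determining section 423 (a sketch)."""
    if area == "moving":
        return 0.0            # pixel is pure foreground
    if area == "still":
        return 1.0            # pixel is pure background
    if area == "covered":
        return alpha_c        # estimate from processing section 421
    return alpha_u            # "uncovered": estimate from section 422

print(determine_mixing_ratio("covered", 0.6, 0.4))  # 0.6
```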
Assuming that the pixel value of pixel position P24 in frame #n-1 is B24, the pixel value DP24 of pixel position P24 in the covered background area shown in Fig. 9 can be expressed by the following equation 5:
DP24 = 3·B24/v + F08/v + F09/v
     = (3/v)·B24 + Σ_{i=08}^{09} Fi/v    ...(5)
This pixel value DP24 contains a background component of 3/v, so that when the amount of motion v is "5" (v = 5), the mixing ratio α is 3/5 (α = 3/5).
That is, the pixel value Dgc of a pixel position Pg in the covered background area can be expressed by the following equation 6, where "Bg" denotes the pixel value of pixel position Pg in frame #n-1 and "FEg" denotes the sum of the foreground components at pixel position Pg:
Dgc=αc·Bg+FEg ...(6)
Further, if it is assumed that the pixel value at the same pixel position in frame #n+1 is Fg and that the values of Fg/v at this pixel position are all equal to each other, then FEg = (1 - αc)·Fg. That is, equation 6 can be rewritten as the following equation 7:
Dgc=αc·Bg+(1-αc)Fg ...(7)
Equation 7 can be rewritten as the following equation 8:
αc=(Dgc-Fg)/(Bg-Fg) ...(8)
In equation 8, Dgc, Bg and Fg are known, so the estimated mixing ratio processing section 421 can obtain the estimated mixing ratio αc of a pixel in the covered background area by using the pixel values of frames #n-1, #n and #n+1.
Similarly to the case of the covered background area, if the pixel value in the uncovered background area is assumed to be Dgu, the following equation 9 can be obtained:
αu=(Dgu-Bg)/(Fg-Bg) ...(9)
In equation 9, Dgu, Bg and Fg are known, so the estimated mixing ratio processing section 422 can obtain the estimated mixing ratio αu of a pixel in the uncovered background area by using the pixel values of frames #n-1, #n and #n+1.
If the area information AR indicates the still area, the mixing ratio determining section 423 sets the mixing ratio α to 1 (α = 1), and if it indicates the moving area, it sets the ratio to 0 (α = 0), and then outputs the ratio. If the area information AR indicates the covered background area or the uncovered background area, it outputs as the mixing ratio α the estimated mixing ratio αc calculated by the estimated mixing ratio processing section 421 or the estimated mixing ratio αu calculated by the estimated mixing ratio processing section 422, respectively.
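Equations 8 and 9 can be written out directly as below; the function names are hypothetical, and no guard against a zero denominator (Bg = Fg) is described in the text.

```python
def estimate_alpha_covered(dgc, bg, fg):
    """Equation 8: alpha_c = (Dgc - Fg) / (Bg - Fg), with Bg taken from
    frame #n-1 and Fg from frame #n+1 (sketch only)."""
    return (dgc - fg) / (bg - fg)

def estimate_alpha_uncovered(dgu, bg, fg):
    """Equation 9: alpha_u = (Dgu - Bg) / (Fg - Bg) (sketch only)."""
    return (dgu - bg) / (fg - bg)

# A covered-area pixel that is 3/5 background, as in the DP24 example
# (amount of motion v = 5, three background fifths):
bg, fg = 100.0, 50.0
dgc = 0.6 * bg + 0.4 * fg
print(estimate_alpha_covered(dgc, bg, fg))  # 0.6
```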
Fig. 13 shows a block diagram of the foreground/background separation part 43. The image data DVa supplied to the foreground/background separation part 43 and the area information AR supplied from the area identification part 41 are supplied to a separating part 431, a switch section 432 and another switch section 433. The mixing ratio α supplied from the mixing ratio calculating part 42 is supplied to the separating part 431.

According to the area information AR, the separating part 431 separates from the image data DVa the data of the pixels in the covered background area and the uncovered background area. From the separated data and the mixing ratio α, it separates from each other the components of the moving foreground object and the components of the stationary background, supplies the foreground components, which are the components of the foreground object, to a composite part 434, and supplies the background components to another composite part 435.

For example, in frame #n of Fig. 9, pixel positions P22-P25 belong to the covered background area, and the pixel positions P22-P25 have mixing ratios α22-α25, respectively. Assuming that the pixel value of pixel position P22 in frame #n-1 is "B22j", the pixel value DP22 of pixel position P22 can be expressed by the following equation 10:
DP22=B22j/v+F06/v+F07/v+F08/v+F09/v
=α22·B22j+F06/v+F07/v+F08/v+F09/v ...(10)
The foreground component FE22 of pixel position P22 in frame #n can be expressed by the following equation 11:
FE22=F06/v+F07/v+F08/v+F09/v
=DP22-α22·B22j ...(11)
That is, assuming that the pixel value of a pixel position Pg in frame #n-1 is "Bgj", the foreground component FEgc of the pixel position Pg in the covered background area in frame #n can be obtained by the following equation 12:
FEgc=DPg-αc·Bgj ...(12)
Similarly to the foreground component FEgc in the covered background area, the foreground component FEgu in the uncovered background area can also be obtained.
For example, assuming that the pixel value of pixel position P16 in frame #n+1 is "B16k", the pixel value DP16 of pixel position P16 in the uncovered background area in frame #n can be expressed by the following equation 13:
DP16=B16k/v+F01/v+F02/v+F03/v+F04/v
=α16·B16k+F01/v+F02/v+F03/v+F04/v ...(13)
The foreground component FE16 of pixel position P16 in frame #n can be expressed by the following equation 14:
FE16=F01/v+F02/v+F03/v+F04/v
=DP16-α16·B16k ...(14)
That is, assuming that the pixel value of a pixel position Pg in frame #n+1 is "Bgk", the foreground component FEgu of the pixel position Pg in the uncovered background area in frame #n can be obtained by the following equation 15:
FEgu=DPg-αu·Bgk ...(15)
The separating part 431 can thus separate the foreground components and the background components from each other by using the image data DVa, the area information AR produced by the area identification part 41, and the mixing ratio α calculated by the mixing ratio calculating part.

The switch section 432 performs switch control according to the area information AR so as to select the data of the pixels in the moving area from the image data DVa and supply them to the composite part 434. The switch section 433 performs switch control according to the area information AR so as to select the data of the pixels in the still area from the image data DVa and supply them to the composite part 435.

The composite part 434 synthesizes the foreground component image data DBe by using the components of the foreground object supplied from the separating part 431 and the data of the moving area supplied from the switch section 432, and supplies it to the motion blur adjusting part 44. In the initialization performed at the beginning of the processing for producing the foreground component image data DBe, the composite part 434 stores, in a built-in frame memory, initial data whose pixel values are all 0, and then overwrites this initial data with the image data. Accordingly, the part corresponding to the background area remains in the state of the initial data.

The composite part 435 synthesizes the background component image data DBb by using the background components supplied from the separating part 431 and the data of the still area supplied from the switch section 433, and supplies it to the output part 45. In the initialization performed at the beginning of the processing for producing the background component image data DBb, the composite part 435 stores, in a built-in frame memory, an image whose pixel values are all 0, and then overwrites this initial data with the image data. Accordingly, the part corresponding to the foreground area remains in the state of the initial data.
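The core of the separation, equations 12 and 15, can be sketched for a single mixed pixel; the function name and the scalar pixel representation are illustrative only, and a real implementation would process whole regions.

```python
def separate_mixed_pixel(dp, alpha, background_value):
    """Equations 12 and 15 (a sketch): split a mixed pixel of frame #n
    into its foreground-component sum and its background contribution.

    background_value is the pixel value taken from the adjacent frame in
    which the background is fully visible (#n-1 for the covered area,
    #n+1 for the uncovered area).
    """
    background_part = alpha * background_value
    foreground_sum = dp - background_part
    return foreground_sum, background_part

# DP22 of Fig. 9 with v = 5: one background fifth (alpha = 1/5).
fe, bgp = separate_mixed_pixel(90.0, 0.2, 100.0)
print(fe, bgp)  # 70.0 20.0
```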
Fig. 14 shows a block diagram of the motion blur adjusting part 44. The motion vector MVC supplied from the motion vector detection part 30 is supplied to an adjusting processing unit determining section 441 and a modeling section 442. The area information AR supplied from the area identification part 41 is supplied to the adjusting processing unit determining section 441. The foreground component image data DBe supplied from the foreground/background separation part 43 is supplied to an adding section 444.

According to the area information AR and the motion vector MVC, the adjusting processing unit determining section 441 sets, as an adjusting processing unit, the contiguous pixels lined up in the direction of motion from the covered background area toward the uncovered background area in the foreground component image, or alternatively the contiguous pixels lined up in the direction of motion from the uncovered background area toward the covered background area. It supplies adjusting processing unit information HC representing the set adjusting processing unit to the modeling section 442 and the adding section 444. Fig. 15 shows the adjusting processing unit in the case where, for example, the pixel positions P13-P25 in frame #n of Fig. 9 are set as the adjusting processing unit. Note that if the direction of motion differs from the horizontal or vertical direction, the direction of motion can be changed to the horizontal or vertical direction by performing an affine transformation in the adjusting processing unit determining section 441, so that the processing is carried out in the same manner as when the direction is horizontal or vertical.

The modeling section 442 performs modeling according to the motion vector MVC and the set adjusting processing unit information HC. In this modeling, a plurality of models corresponding to the number of pixels contained in the adjusting processing unit, the virtual division number of the image data DVa in the time direction, and the number of pixel-specific foreground components can be stored in advance, so that a model MD designating the correlation between the image data DVa and the foreground components is selected according to the adjusting processing unit and the virtual division number of the pixel values in the time direction.

The modeling section 442 supplies the selected model MD to an equation generating section 443. The equation generating section 443 generates equations according to the model MD supplied from the modeling section 442. Assuming, as described above, that the adjusting processing unit consists of the pixel positions P13-P25 in frame #n, that the amount of motion v is "five pixels" and that the virtual division number is "five", the foreground component FE01 at pixel position C01 within the adjusting processing unit and the foreground components FE02-FE13 at the pixel positions C02-C13 can be expressed by the following equations 16-28:
FE01=F01/v ...(16)
FE02=F02/v+F01/v ...(17)
FE03=F03/v+F02/v+F01/v ...(18)
FE04=F04/v+F03/v+F02/v+F01/v ...(19)
FE05=F05/v+F04/v+F03/v+F02/v+F01/v ...(20)
FE06=F06/v+F05/v+F04/v+F03/v+F02/v ...(21)
FE07=F07/v+F06/v+F05/v+F04/v+F03/v ...(22)
FE08=F08/v+F07/v+F06/v+F05/v+F04/v ...(23)
FE09=F09/v+F08/v+F07/v+F06/v+F05/v ...(24)
FE10=F09/v+F08/v+F07/v+F06/v ...(25)
FE11=F09/v+F08/v+F07/v ...(26)
FE12=F09/v+F08/v ...(27)
FE13=F09/v ...(28)
The equation generating section 443 modifies the generated equations to produce new equations. The equation generating section 443 produces the following equations 29-41:
FE01=1·F01/v+0·F02/v+0·F03/v+0·F04/v+0·F05/v
+0·F06/v+0·F07/v+0·F08/v+0·F09/v ...(29)
FE02=1·F01/v+1·F02/v+0·F03/v+0·F04/v+0·F05/v
+0·F06/v+0·F07/v+0·F08/v+0·F09/v ...(30)
FE03=1·F01/v+1·F02/v+1·F03/v+0·F04/v+0·F05/v
+0·F06/v+0·F07/v+0·F08/v+0·F09/v ...(31)
FE04=1·F01/v+1·F02/v+1·F03/v+1·F04/v+0·F05/v
+0·F06/v+0·F07/v+0·F08/v+0·F09/v ...(32)
FE05=1·F01/v+1·F02/v+1·F03/v+1·F04/v+1·F05/v
+0·F06/v+0·F07/v+0·F08/v+0·F09/v ...(33)
FE06=0·F01/v+1·F02/v+1·F03/v+1·F04/v+1·F05/v
+1·F06/v+0·F07/v+0·F08/v+0·F09/v ...(34)
FE07=0·F01/v+0·F02/v+1·F03/v+1·F04/v+1·F05/v
+1·F06/v+1·F07/v+0·F08/v+0·F09/v ...(35)
FE08=0·F01/v+0·F02/v+0·F03/v+1·F04/v+1·F05/v
+1·F06/v+1·F07/v+1·F08/v+0·F09/v ...(36)
FE09=0·F01/v+0·F02/v+0·F03/v+0·F04/v+1·F05/v
+1·F06/v+1·F07/v+1·F08/v+1·F09/v ...(37)
FE10=0·F01/v+0·F02/v+0·F03/v+0·F04/v+0·F05/v
+1·F06/v+1·F07/v+1·F08/v+1·F09/v ...(38)
FE11=0·F01/v+0·F02/v+0·F03/v+0·F04/v+0·F05/v
+0·F06/v+1·F07/v+1·F08/v+1·F09/v ...(39)
FE12=0·F01/v+0·F02/v+0·F03/v+0·F04/v+0·F05/v
+0·F06/v+0·F07/v+1·F08/v+1·F09/v ...(40)
FE13=0·F01/v+0·F02/v+0·F03/v+0·F04/v+0·F05/v
+0·F06/v+0·F07/v+0·F08/v+1·F09/v ...(41)
Equations 29-41 can also be expressed as the following equation 42:
FEj = Σ_{i=01}^{09} aij·Fi/v    ...(42)
In equation 42, "j" denotes a pixel position within the adjusting processing unit; in this example, j takes any one of the values 1-13. "i" denotes the position of a foreground component; in this example, i takes any one of the values 1-9. aij takes either of the values 0 and 1 according to the values of i and j.
Taking error into account, equation 42 can be expressed as the following equation 43:
FEj = Σ_{i=01}^{09} aij·Fi/v + ej    ...(43)
In equation 43, ej denotes the error contained in the target pixel Cj. Equation 43 can be rewritten as the following equation 44:
ej = FEj - Σ_{i=01}^{09} aij·Fi/v    ...(44)
To apply the least squares method, the sum of squares E of the error is defined as the following equation 45:
E = Σ_{j=01}^{13} ej²    ...(45)
To minimize the error, the partial derivative of the sum of squares E of the error with respect to the variable Fk is set to 0, so that Fk satisfying the following equation 46 can be obtained:
∂E/∂Fk = 2·Σ_{j=01}^{13} ej·(∂ej/∂Fk)
       = 2·Σ_{j=01}^{13} ((FEj - Σ_{i=01}^{09} aij·Fi/v)·(-akj/v)) = 0    ...(46)
In equation 46, the amount of motion v is fixed, so the following equation 47 can be obtained:
Σ_{j=01}^{13} akj·(FEj - Σ_{i=01}^{09} aij·Fi/v) = 0    ...(47)
Expanding equation 47 and transposing terms gives the following equation 48:
Σ_{j=01}^{13} (akj·Σ_{i=01}^{09} aij·Fi) = v·Σ_{j=01}^{13} akj·FEj    ...(48)
By substituting each of the integers 1-9 for k, equation 48 expands into nine equations. These nine equations can then be expressed as a single equation by using a matrix; this equation is called a normal equation.

The following equation 49 shows an example of the normal equation that the equation generating section 443 produces according to the least squares method:
| 5 4 3 2 1 0 0 0 0 |   | F01 |       | Σ_{i=01}^{05} FEi |
| 4 5 4 3 2 1 0 0 0 |   | F02 |       | Σ_{i=02}^{06} FEi |
| 3 4 5 4 3 2 1 0 0 |   | F03 |       | Σ_{i=03}^{07} FEi |
| 2 3 4 5 4 3 2 1 0 |   | F04 |       | Σ_{i=04}^{08} FEi |
| 1 2 3 4 5 4 3 2 1 | · | F05 | = v · | Σ_{i=05}^{09} FEi |
| 0 1 2 3 4 5 4 3 2 |   | F06 |       | Σ_{i=06}^{10} FEi |
| 0 0 1 2 3 4 5 4 3 |   | F07 |       | Σ_{i=07}^{11} FEi |
| 0 0 0 1 2 3 4 5 4 |   | F08 |       | Σ_{i=08}^{12} FEi |
| 0 0 0 0 1 2 3 4 5 |   | F09 |       | Σ_{i=09}^{13} FEi |
...(49)
If equation 49 is expressed as A·F = v·FE, then A and v are known at the point of modeling. Further, FE becomes known by inputting the pixel values in the adding process, leaving F unknown.

By using the normal equation, which is based on the least squares method, the foreground components F can thus be calculated while the error contained in the pixel values FE is suppressed. The equation generating section 443 supplies the normal equation produced in this way to the adding section 444.
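Under the stated assumptions (amount of motion v = 5, virtual division number 5, thirteen pixels C01-C13 and nine foreground components F01-F09), the following sketch builds the model of equation 42, forms the normal equation 49, and solves it by Gauss-Jordan elimination; all function names are illustrative, and a real implementation would rely on a linear-algebra library.

```python
def build_model(n_components=9, v=5, n_pixels=13):
    """Coefficient matrix a_ij of equation 42 for the Fig. 15 example:
    component Fi contributes 1/v to pixel FEj when i <= j <= i + v - 1."""
    return [[1 if i <= j <= i + v - 1 else 0
             for i in range(1, n_components + 1)]
            for j in range(1, n_pixels + 1)]

def solve_normal_equation(a, fe, v):
    """Form A·F = v·(a^T)·FE with A = a^T·a (equation 49) and solve it by
    Gauss-Jordan elimination, the 'sweeping-out' method named in the text."""
    n = len(a[0])
    # Normal-equation matrix and right-hand side.
    A = [[sum(a[j][k] * a[j][i] for j in range(len(a))) for i in range(n)]
         for k in range(n)]
    b = [v * sum(a[j][k] * fe[j] for j in range(len(a))) for k in range(n)]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        d = A[col][col]
        A[col] = [x / d for x in A[col]]
        b[col] /= d
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return b

v = 5
a = build_model()
true_f = [10.0, 25.0, 40.0, 30.0, 20.0, 60.0, 80.0, 50.0, 15.0]
fe = [sum(a[j][i] * true_f[i] for i in range(9)) / v for j in range(13)]
f = solve_normal_equation(a, fe, v)
print([round(x, 6) for x in f])  # recovers true_f
```

The leading rows of `build_model()` reproduce equations 29-41 (FE01 depends on F01 only, FE13 on F09 only), and `a^T·a` reproduces the banded 5-4-3-2-1 matrix of equation 49.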
According to the adjusting processing unit information HC supplied from the adjusting processing unit determining section 441, the adding section 444 sets the foreground component image data DBe into the matrix supplied from the equation generating section 443. The adding section 444 then supplies the matrix with the image data set in it to a calculating section 445.
The calculating section 445 processes the motion-blur-free foreground components Fi/v by a solving method such as the sweeping-out method (Gauss-Jordan elimination), thereby producing the pixel values F01-F09 of the foreground in which motion blur has been mitigated. The pixel values F01-F09 produced in this way are supplied to the output part 45 at, for example, the half point of one frame period, with the pixel values F01-F09 set at image positions referenced to the center of the adjusting processing unit, so that the position of the foreground component image is not changed. That is, as shown in Fig. 16, with the pixel values F01-F09 used as the image data of the respective pixel positions C03-C11, the image data DVafc of the motion-blur-mitigated foreground component image is supplied to the output part 45 at the timing of 1/2 of one frame period.

Note that when an even number of pixel values is given, for example when the pixel values F01-F08 have been obtained, the calculating section 445 outputs either one of the two central pixel values F04 and F05 as the center of the adjusting processing unit. Also, when the in-frame exposure time is shorter than one frame period because a shutter operation is performed, the data are supplied to the output part 45 at the half point of the exposure time.
The output part 45 combines the foreground component image data DBf supplied from the motion blur adjusting part 44 into the background component image data DBb supplied from the foreground/background separation part 43 to produce image data DVout, and outputs it. The image data DVout thus produced is supplied to the memory 55. In this case, the motion-blur-mitigated foreground component image is combined at the space-time position corresponding to the motion vector MVC detected by the motion vector detection part 30. That is, by combining the motion-blur-mitigated foreground component image at the position that is indicated by the processing region information HZ and set according to the motion vector MVC, the motion-blur-mitigated foreground component image is output onto the image position properly set before the motion blur adjustment.

The memory 55 stores the image data DVout of the motion blurring reduction image supplied from the output part 45. The image data stored in it is supplied to the motion vector detection part 30 as the image data DVm.
Thus, by producing a motion blurring reduction image in which the blur of the motion of the moving object in the image has been mitigated, and by detecting the motion vector by using this motion blurring reduction image, the motion vector of the moving object can be detected accurately with a reduced influence of the motion blur present in the image supplied as the image data DVa from the image sensor 10.

Further, for a processing region of the image, modeling is performed on the assumption that the pixel values of the pixels free of motion blur corresponding to the moving object are integrated in the time direction while the moving object moves according to the motion vector, so as to extract, as significant information, the mixing ratio between the foreground object components and the background object components; the components of the moving object are then separated by using this significant information, and the motion blur is mitigated accurately on the basis of the separated components of the moving object.

Further, since the image of the moving object whose motion blur has been mitigated is output, according to the motion vector, to the position of the target pixel or to the position corresponding to the target pixel, the image of the moving object can be output to the proper position.
Meanwhile, motion blur can also be mitigated by using software. Fig. 17 shows another configuration of the apparatus for processing images, namely the case where motion blur is mitigated by using software. A central processing unit (CPU) 61 performs various kinds of processing according to programs stored in a read-only memory (ROM) 62 or a storage section 63. The storage section 63 is formed by, for example, a hard disk, and stores the programs to be executed by the CPU 61 and various kinds of data. A random access memory (RAM) 64 appropriately stores the programs executed by the CPU 61, the data used in processing, and so on. The CPU 61, the ROM 62, the storage section 63 and the RAM 64 are connected with each other by a bus 65.

An input interface section 66, an output interface section 67, a communication section 68 and a drive 69 are connected to the CPU 61 via the bus 65. An input device such as a keyboard, a pointing device (for example a mouse) or a microphone is connected to the input interface section 66, while an output device such as a display or a speaker is connected to the output interface section 67. The CPU 61 performs various kinds of processing according to commands input through the input interface section 66, and outputs the images, sounds and the like obtained as a result of the processing through the output interface section 67. The communication section 68 communicates with external devices via the Internet or any other network; it is used to receive the image data DVa output from the image sensor 10, to acquire programs, and so on. When a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory is mounted on the drive 69, the drive 69 drives it to acquire the programs or data recorded on or in it. The acquired programs and data are transferred to the storage section 63 and stored in it as required.
The operation of the apparatus for processing images is described below with reference to the flowchart of Figure 18. At step ST1, the CPU 61 acquires the image data DVa generated by the image sensor 10 through the input section, the communications portion, or the like, and stores the acquired image data DVa in the storage area 63.

At step ST2, the CPU 61 determines whether the motion vector can be detected by using motion-blur-reduced images. If the storage area 63 or the RAM 64 does not store image data of as many motion-blur-reduced frames as are required for motion vector detection, so that the motion vector cannot be detected by using the image data of motion-blur-reduced images, the processing proceeds to step ST3. If image data of as many motion-blur-reduced frames as are required is stored, so that the motion vector can be detected by using the stored image data, the processing proceeds to step ST4.

At step ST3, the CPU 61 sets the image data DVa acquired at step ST1 as the motion vector detection data, and the processing proceeds to step ST5. At step ST4, on the other hand, the CPU 61 sets the stored image data DVm of the motion-blur-reduced images as the motion vector detection data, and the processing proceeds to step ST5.

At step ST5, the CPU 61 sets a processing region according to an instruction from the outside.

At step ST6, the CPU 61 detects, by using the motion vector detection data, the motion vector of the moving object corresponding to the foreground OBf in the processing region set at step ST5.

At step ST7, the CPU 61 acquires the exposure time parameter, and the processing proceeds to step ST8, at which the motion vector detected at step ST6 is corrected according to the exposure time parameter; the processing then proceeds to step ST9.

At step ST9, the CPU 61 performs motion-blur-reduced object image generation processing according to the corrected motion vector so as to reduce the motion blur of the moving object OBf, and generates image data in which the motion blur of the moving object has been reduced. Figure 19 shows a flowchart of this generation processing for the motion-blur-reduced object image.
At step ST15, the CPU 61 performs region identification processing on the processing region set at step ST5, judging whether each pixel in the processing region belongs to the background region, the foreground region, the covered background region, or the uncovered background region, thereby generating region information. In generating the region information, if frame #n is the frame to be processed, the image data of frames #n−2, #n−1, #n, #n+1, and #n+2 is used to calculate inter-frame absolute differences. According to whether each inter-frame absolute difference is greater than a preset threshold value Th, each pixel is judged to belong to a moving part or a stationary part, the region judgment is performed according to this result, and the region information is thereby generated.
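As a rough sketch of the moving/stationary judgment just described, the snippet below thresholds inter-frame absolute differences; the function name, the threshold value, and the use of only three frames (rather than the five frames #n−2 to #n+2 of the embodiment) are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def judge_motion(frames, th):
    """Label each pixel of the middle frame as moving (True) or stationary
    (False) by thresholding inter-frame absolute differences.
    frames: [frame #n-1, frame #n, frame #n+1] as 2-D arrays (simplified
    from the five-frame scheme of the embodiment)."""
    prev, cur, nxt = (f.astype(np.int32) for f in frames)
    moving_before = np.abs(cur - prev) > th  # changed since the previous frame
    moving_after = np.abs(nxt - cur) > th    # changing toward the next frame
    return moving_before | moving_after

# Toy example: a bright block moves one pixel to the right per frame.
f0 = np.array([[0, 200, 200, 0, 0]])
f1 = np.array([[0, 0, 200, 200, 0]])
f2 = np.array([[0, 0, 0, 200, 200]])
mask = judge_motion([f0, f1, f2], th=30)
print(mask.astype(int))  # [[0 1 1 1 1]]
```

Pixels touched by the block in any of the three frames are flagged as moving; region labels (foreground, covered/uncovered background) would then be derived from the pattern of these flags.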
At step ST16, the CPU 61 performs mixture ratio calculation processing by using the region information generated at step ST15, calculating for each pixel in the processing region a mixture ratio α that represents the proportion of background components it contains, and the processing proceeds to step ST17. In calculating the mixture ratio α, for pixels in the covered background region or the uncovered background region, the pixel values of frames #n−1, #n, and #n+1 are used to obtain an estimated mixture ratio αc. In addition, the mixture ratio α is set to "1" for the background region and to "0" for the foreground region.
At step ST17, according to the region information generated at step ST15 and the mixture ratio α calculated at step ST16, the CPU 61 performs foreground/background separation processing so that the image data in the processing region is separated into foreground component image data consisting only of foreground components and background component image data consisting only of background components. That is, by performing the operation of equation 12 described above on the covered background region in frame #n and the operation of equation 15 described above on the uncovered background region, the foreground components can be obtained, so that the image data is separated into foreground component image data and background component image data consisting only of background components.
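Since the concrete forms of equations 12 and 15 are not reproduced in this passage, the sketch below only illustrates the general mixture model that underlies the separation, C = α·B + F (observed pixel value = background contribution plus foreground components); the function and variable names are hypothetical.

```python
import numpy as np

def separate(c, alpha, b):
    """Split mixed pixel values into foreground and background components,
    assuming the mixture model C = alpha*B + F. (The concrete covered/
    uncovered forms, equations 12 and 15, are not reproduced here.)"""
    c = np.asarray(c, dtype=float)
    b = np.asarray(b, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    fg = c - alpha * b  # foreground component image data
    bg = alpha * b      # background component image data
    return fg, bg

# Mixed-region pixel: 30% of background value 100 plus foreground components.
fg, bg = separate(c=[170.0], alpha=[0.3], b=[100.0])
print(fg, bg)  # [140.] [30.]
```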
At step ST18, the CPU 61 performs motion blur adjustment processing according to the corrected motion vector obtained at step ST8 and the region information generated at step ST15, determining an adjustment processing unit that indicates at least one pixel included in the foreground component image data, thereby reducing the motion blur included in the foreground component image data separated at step ST17. That is, it sets the adjustment processing unit according to the motion vector MVC, the processing region information HZ, and the region information AR, and performs modeling according to the motion vector MVC and the set adjustment processing unit to generate a normal equation. It sets the image data into the generated normal equation and processes it by an elimination method (Gauss-Jordan elimination) to generate the image data of the motion-blur-reduced object image, that is, foreground component image data in which the motion blur has been reduced.

At step ST10, the CPU 61 performs output processing on the result of the above processing to generate and output the image data DVout of the motion-blur-reduced image as the processing result: the motion-blur-reduced foreground component image data generated at step ST18 is combined into one image with the background component image data separated at step ST17, at the space-time position corresponding to the motion vector obtained at step ST8.

At step ST11, the CPU 61 determines whether the motion blur reduction processing should end. If motion blur reduction processing is to be performed on the image of the next frame, the processing returns to step ST2; if not, the processing ends. Motion blur reduction processing can thus also be performed by using software.
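The control flow of steps ST1–ST11 can be sketched as follows; the stand-in callables and the assumed number of frames required for motion vector detection (`FRAMES_NEEDED`) are placeholders for the processing described above, not an actual implementation.

```python
from collections import deque

FRAMES_NEEDED = 3  # frames required for motion vector detection (assumed)

def process_stream(frames, reduce_blur, detect_mv, correct_mv, exposure):
    """Control-flow sketch of steps ST1-ST11: raw frames (DVa) are used
    for motion vector detection until enough motion-blur-reduced frames
    are stored, after which the stored frames (DVm) are used instead."""
    stored = deque(maxlen=FRAMES_NEEDED)  # motion-blur-reduced frames
    outputs = []
    for frame in frames:                  # ST1: acquire a frame
        if len(stored) < FRAMES_NEEDED:   # ST2 -> ST3: not enough stored
            detection_data = frame
        else:                             # ST2 -> ST4: use stored DVm
            detection_data = stored[-1]
        mv = detect_mv(detection_data)    # ST6: detect the motion vector
        mv = correct_mv(mv, exposure)     # ST7-ST8: exposure-time correction
        out = reduce_blur(frame, mv)      # ST9-ST10: blur-reduced output
        stored.append(out)
        outputs.append(out)
    return outputs

# With identity stand-ins the loop simply passes frames through.
result = process_stream(["f0", "f1", "f2", "f3"],
                        reduce_blur=lambda f, mv: f,
                        detect_mv=lambda d: (1, 0),
                        correct_mv=lambda mv, e: mv,
                        exposure=1.0)
print(result)  # ['f0', 'f1', 'f2', 'f3']
```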
Although the above embodiment obtains the motion vector of the object whose motion blur is to be reduced, separates the processing region containing that object into a stationary region, a moving region, a mixed region, and so on, and performs the motion blur reduction processing by using the image data of the moving region and the mixed region, motion blur can also be reduced by obtaining a motion vector for each pixel and performing motion blur reduction processing without identifying the foreground, the background, and the mixed region.

In this case, the motion vector detection section 30 obtains the motion vector of a target pixel and supplies it to the motion blurring reduction image producing part 40. It also supplies the output part with processing region information HD that indicates the pixel position of the target pixel.
Figure 20 shows the configuration of a motion blurring reduction image producing part that can reduce motion blur without identifying the foreground, the background, and the mixed region. A processing region setting part 48 in the motion blurring reduction image producing part 40a sets a processing region on the image for the target pixel whose motion blur is to be reduced, in such a manner that the processing region is aligned with the motion direction of the motion vector of that target pixel, and then notifies a calculating section 49 of it. It also supplies the position of the target pixel to an output part 45a. Figure 21 shows such a processing region, which is set so as to have (2N+1) pixels in the motion direction, centered on the target pixel. Figure 22 shows examples of the processing region setting: if, for the pixels of the moving object OBf whose motion blur is to be reduced, the motion vector extends horizontally, for example as indicated by arrow B, the processing region WA is set horizontally as shown in Figure 22A. If, on the other hand, the motion vector extends obliquely, the processing region WA is set in the direction of the corresponding angle, as shown in Figure 22B. Note, however, that to set a processing region obliquely, the pixel values corresponding to the pixel positions of the processing region must be obtained by interpolation or the like.
In this case, as shown in Figure 23, the real-world variables (Y_-8, …, Y_0, …, Y_8) in the processing region are mixed time-wise. Note that Figure 23 shows a case where the amount of motion v is set to 5 (v=5) and the processing region consists of 13 pixels (N=6, where N is the number of pixels of the processing width on each side of the target pixel).

The calculating section 49 performs real-world estimation on this processing region, and outputs only the center pixel variable Y_0 of the estimated real world as the pixel value of the target pixel from which the motion blur has been removed.

Supposing here that the pixel values of the pixels in the processing region are X_-N, X_-N+1, …, X_0, …, X_N-1, X_N, the (2N+1) mixing equations shown in equation 50 can be established. In these equations, the constant h represents the integer part of the value obtained by multiplying the amount of motion v by (1/2) (the decimal places are discarded).
Σ_{i=t−h}^{t+h} (Y_i / v) = X_t ...(50)
(t = −N, …, 0, …, N)
However, there are (2N+v) real-world variables (Y_-N-h, …, Y_0, …, Y_N+h) to be obtained. That is, the number of equations is smaller than the number of variables, so the real-world variables (Y_-N-h, …, Y_0, …, Y_N+h) cannot be obtained from equation 50 alone.

Therefore, the number of equations is increased beyond the number of real-world variables by using the following equation 51, which is a constraint equation that uses spatial correlation, and the values of the real-world variables are obtained by using the least-squares method.
Y_t − Y_{t+1} = 0 ...(51)
(t = −N−h, …, 0, …, N+h−1)
That is, the (2N+v) unknown real-world variables (Y_-N-h, …, Y_0, …, Y_N+h) can be obtained by using a total of (4N+v) equations, namely the (2N+1) mixing equations represented by equation 50 together with the (2N+v−1) constraint equations represented by equation 51.

Note that, by performing the estimation in such a manner that the sum of squares of the errors occurring in these equations is minimized, fluctuations of the pixel values in the real world can be suppressed while the motion-blur-reduced image generation processing is performed.

The following equation 52 represents the case where the processing region is set as shown in Figure 23, with the errors occurring in the equations added to each of equations 50 and 51.
Σ_{i=t−h}^{t+h} (Y_i / v) = X_t + em_t (t = −6, …, 6)
Y_t − Y_{t+1} = eb_t (t = −8, …, 7) ...(52)
Equation 52 can be rewritten as equation 53, and the Y (=Y_i) shown in equation 55, which minimizes the sum of squares E of the errors given by equation 54, is obtained. In equation 55, T denotes the transposed matrix.
A·Y = X + e ...(53)
E = |e|^2 = Σ em_i^2 + Σ eb_i^2 ...(54)
Y = (A^T·A)^-1·A^T·X ...(55)
Note that the sum of squares of the errors is given by, for example, equation 56; by partially differentiating this sum of squares so that the partial differential value becomes 0 as given by equation 57, equation 55, which minimizes the sum of squares of the errors, can be obtained.
E = (A·Y − X)^T·(A·Y − X)
 = Y^T·A^T·A·Y − 2·Y^T·A^T·X + X^T·X ...(56)
∂E/∂Y = 2·(A^T·A·Y − A^T·X) = 0 ...(57)
By performing the linear combination of equation 55, the real-world variables (Y_-N-h, …, Y_0, …, Y_N+h) can each be obtained, and the pixel value of the center pixel variable Y_0 is output as the pixel value of the target pixel. For example, the calculating section 49 stores the matrix (A^T·A)^-1·A^T obtained in advance for each amount of motion, and outputs the pixel value of the center pixel variable Y_0 as the target value, according to the matrix corresponding to the amount of motion and the pixel values of the pixels in the processing region. Performing this processing on all pixels in the processing region yields real-world variables, in each of which the motion blur has been reduced, for the whole screen or for the whole region specified by the user.
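For the case of Figure 23 (v=5, N=6), the construction of the stacked system of equations 50 and 51 and the solver matrix of equation 55 can be sketched as follows; the function names are illustrative, and the calculating section would in practice use matrices precomputed per amount of motion, as described above.

```python
import numpy as np

def deblur_matrix(v=5, N=6):
    """Build the stacked system of equation 50 (2N+1 mixing equations) and
    equation 51 (2N+v-1 smoothness constraints), then return the
    least-squares solver matrix (A^T A)^-1 A^T of equation 55."""
    h = v // 2                        # integer part of v * (1/2)
    n_y = 2 * N + v                   # unknowns Y_{-N-h} .. Y_{N+h}
    rows = []
    for t in range(-N, N + 1):        # mixing equations: sum_i Y_i / v = X_t
        row = np.zeros(n_y)
        for i in range(t - h, t + h + 1):
            row[i + N + h] = 1.0 / v
        rows.append(row)
    for t in range(-N - h, N + h):    # constraints: Y_t - Y_{t+1} = 0
        row = np.zeros(n_y)
        row[t + N + h] = 1.0
        row[t + N + h + 1] = -1.0
        rows.append(row)
    A = np.array(rows)
    return np.linalg.inv(A.T @ A) @ A.T

def estimate_center(X, v=5, N=6):
    """Return Y_0, the deblurred value of the target pixel, from the
    2N+1 observed pixel values X_{-N} .. X_N."""
    h = v // 2
    rhs = np.concatenate([np.asarray(X, float), np.zeros(2 * N + v - 1)])
    Y = deblur_matrix(v, N) @ rhs
    return Y[N + h]                   # center pixel variable Y_0

# A uniform region blurs to itself, so Y_0 recovers the constant exactly.
print(round(float(estimate_center([120.0] * 13)), 6))  # 120.0
```

The matrix A here has (4N+v) = 29 rows and (2N+v) = 17 columns; the constraints give it full column rank, so (A^T·A) is invertible.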
Although the above embodiment obtains the real-world variables (Y_-N-h, …, Y_0, …, Y_N+h) by using the least-squares method in such a manner that the sum of squares E of the errors in A·Y = X + e is minimized, the following equation 58 can instead be formed so that the number of equations equals the number of variables. By expressing this equation as A·Y = X and rewriting it as Y = A^-1·X, the real-world variables (Y_-N-h, …, Y_0, …, Y_N+h) can be obtained.
In equation 58 (written for the case of Figure 23, v=5, N=6), the coefficient matrix is a 17×17 matrix: its first 13 rows are the mixing equations, the row for pixel X_t containing five consecutive entries of 1/v in the columns of Y_{t−2}, …, Y_{t+2}, and its remaining four rows are constraint equations of the form Y_t − Y_{t+1} = 0, each containing an adjacent pair of entries 1, −1. This matrix, multiplied by (Y_-8, …, Y_8)^T, is equated to (X_-6, …, X_6, 0, 0, 0, 0)^T. ...(58)
The output part 45a brings the pixel value of the center pixel variable Y_0 obtained by the calculating section 49 into the pixel values set in the region indicated by the processing region information HZ, which is based on the target pixel supplied by the motion vector detection section 30. In addition, if the center pixel variable Y_0 cannot be obtained because the background region or a mixed region is represented, the pixel value that the target pixel had before the motion-blur-reduced image generation processing is used to generate the image data DVout.

In this manner, even if the motion of the moving object differs from pixel to pixel, the real world can still be estimated by using the motion vector corresponding to each target pixel, so accurate motion-blur-reduced image generation processing can be performed. For example, even if the moving object cannot be assumed to be rigid, the motion blur of the image of the moving object can still be reduced.
Meanwhile, in the above embodiment, the motion blur of the moving object OBf is reduced and its image is displayed; hence, even when the moving object OBf moves in the sequence of Figures 24A, 24B, and 24C as shown in Figure 24, the motion blur of the moving object OBf is reduced while it is tracked, and a good image in which the motion blur of the moving object OBf has been reduced can be displayed. Alternatively, however, by controlling the display position of the image according to the moving object OBf so that the motion-blur-reduced image of the moving object OBf is located at a predetermined position on the screen, the image can be displayed so as to track the moving object OBf.

In this case, the motion vector detection section 30 sets, according to the motion vector MV, a tracking point in motion in the region indicated by the region selection information HA, and then supplies the output part 45 with coordinate information HG that indicates this moving tracking point. The output part 45 generates the image data DVout so that the tracking point indicated by the coordinate information HG is located at a predetermined position on the screen. The image can thus be displayed as if the moving object OBf were being tracked.
Furthermore, even when the moving object OBf moves as shown in Figures 25A–25C, an enlarged image of the moving object OBf can be output when the moving object OBf is tracked as shown in Figures 25D–25F, by generating, from the motion-blur-reduced image data DVout, an enlarged image of the moving object OBf set as the tracking point in the region indicated by the region selection information HA, and displaying it in such a manner that the tracking point is located at a predetermined position on the screen. In this case, since the enlarged image of the moving object OBf is displayed up to the size of the picture frame of the image, even if the displayed image moves on the screen so that the tracking point stays at the predetermined position, undisplayed parts can be prevented from appearing. In generating the enlarged image, it is produced by repeating the pixel values of the pixels whose motion blur has been reduced. For example, by repeating each pixel value twice, an enlarged image whose vertical and horizontal sizes are doubled can be generated. Alternatively, by using the mean value of neighboring pixels or the like as a new pixel value, new pixels can be placed between neighboring pixels to generate the enlarged image. Furthermore, by performing spatial resolution creation using the motion-blur-reduced image, a high-definition enlarged image with less motion blur can be output. A case where the enlarged image is generated by performing spatial resolution creation is described below.
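The two simple enlargement variants mentioned above, repeating each pixel value and inserting the mean of neighboring pixels as a new pixel, can be sketched as follows (the function names are illustrative, and the averaging variant is shown in one dimension for brevity):

```python
import numpy as np

def enlarge_repeat(img):
    """Double an image by repeating each pixel value twice vertically and
    horizontally (nearest-neighbour expansion)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def enlarge_average(row):
    """Place new pixels between neighbours using their mean value
    (1-D sketch of the averaging variant)."""
    row = np.asarray(row, dtype=float)
    mids = (row[:-1] + row[1:]) / 2.0
    out = np.empty(row.size + mids.size)
    out[0::2] = row    # original pixels
    out[1::2] = mids   # interpolated pixels between neighbours
    return out

img = np.array([[1, 2], [3, 4]])
print(enlarge_repeat(img))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
print(enlarge_average([10, 20, 40]))  # [10. 15. 20. 30. 40.]
```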
Figure 26 shows another configuration of the apparatus for processing images, which can output an enlarged image by performing spatial resolution creation. Note that in Figure 26, parts corresponding to those in Figure 5 are denoted by the same reference signs, and detailed descriptions of them are omitted.

The coordinate information HG generated by the motion vector detection section 30 is supplied to a spatial resolution creation part 70. The image data DVout of the motion-blur-reduced image output by the motion blurring reduction image producing part 40 is also supplied to the spatial resolution creation part 70.

Figure 27 shows the configuration of the spatial resolution creation part. The motion-blur-reduced image data DVout is supplied to the spatial resolution creation part 70.
The spatial resolution creation part 70 comprises: a class classification part 71 for classifying the target pixels of the image data DVout; a predictive coefficient memory 72 for outputting predictive coefficients corresponding to the classification results of the class classification part 71; a prediction calculating section 73 for generating interpolated pixel data DH by performing prediction calculations using the predictive coefficients output from the predictive coefficient memory 72 and the image data DVout; and an enlarged image output part 74 for reading, according to the coordinate information HG supplied by the motion vector detection section 30, as many pixels of the image of the object OBj as are to be displayed, and outputting the image data DVz of the enlarged image.
The image data DVout is supplied to a class pixel group cutting-out section 711 in the class classification part 71, to a prediction pixel group cutting-out section 731 in the prediction calculating section 73, and to the enlarged image output part 74. The class pixel group cutting-out section 711 cuts out the pixels necessary for class classification (a motion class) representing the degree of motion. The pixel group cut out by the class pixel group cutting-out section 711 is supplied to a class value determining section 712. The class value determining section 712 calculates inter-frame differences for the pixel data of the pixel group cut out by the class pixel group cutting-out section 711, and classifies, for example, the mean absolute values of these inter-frame differences by comparing them with a plurality of preset threshold values, thereby determining a class value CL.

The predictive coefficient memory 72 stores predictive coefficients, and supplies the prediction calculating section 73 with the predictive coefficient KE corresponding to the class value CL determined by the class classification part 71.

The prediction pixel group cutting-out section 731 in the prediction calculating section 73 cuts out, from the image data DVout, the pixel data (that is, the prediction taps) TP used in the prediction calculation, and supplies it to a computing part 732. The computing part 732 performs a one-dimensional linear operation using the predictive coefficient KE supplied from the predictive coefficient memory 72 and the prediction taps TP, thereby calculating the interpolated pixel data DH corresponding to the target pixel, and supplies it to the enlarged image output part 74.

The enlarged image output part 74 generates the image data DVz of the enlarged image by reading, from the image data DVout and the interpolated pixel data DH, as much pixel data as the display size, and outputs it so that the position based on the coordinate information HG is located at a predetermined position on the screen.

By generating the enlarged image in this manner and using the generated interpolated pixel data DH and image data DVout, an enlarged high-quality image in which the motion blur has been reduced can be output. For example, by generating the interpolated pixel data DH and doubling the numbers of horizontal and vertical pixels, a high-quality image in which the motion blur has been reduced can be output, with the moving object OBf doubled vertically and horizontally.
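A minimal sketch of this class-adaptive prediction follows: a motion class is determined from inter-frame differences of the class taps, the class's predictive coefficient set KE is looked up, and the interpolated pixel DH is computed as a one-dimensional linear operation (a dot product) over the prediction taps TP. The threshold values and coefficient values are illustrative assumptions, not learned coefficients.

```python
import numpy as np

def class_value(taps_prev, taps_cur, thresholds=(5.0, 20.0)):
    """Motion-class classification: compare the mean absolute inter-frame
    difference of the class taps against preset thresholds (values here
    are illustrative)."""
    diff = np.mean(np.abs(np.asarray(taps_cur, float) - np.asarray(taps_prev, float)))
    cl = 0
    for th in thresholds:
        if diff > th:
            cl += 1
    return cl  # 0 = still, 1 = slow motion, 2 = fast motion

def predict_pixel(coeff_memory, cl, prediction_taps):
    """One-dimensional linear operation: the interpolated pixel DH is the
    dot product of the class's predictive coefficients KE and the taps TP."""
    ke = coeff_memory[cl]
    return float(np.dot(ke, prediction_taps))

# Illustrative coefficient memory with 4 taps per class.
memory = {0: np.full(4, 0.25),
          1: np.array([0.4, 0.4, 0.1, 0.1]),
          2: np.full(4, 0.25)}
cl = class_value([10, 10, 10, 10], [12, 11, 10, 13])  # mean |diff| = 1.5
print(cl, predict_pixel(memory, cl, [100, 104, 96, 100]))  # 0 100.0
```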
Note that the predictive coefficients stored in the predictive coefficient memory 72 can be created by using the learning device shown in Figure 28. Note also that in Figure 28, parts corresponding to those in Figure 27 are denoted by the same reference signs.

The learning device 75 comprises the class classification part 71, the predictive coefficient memory 72, and a coefficient calculation part 76. Image data GS of student images, generated by reducing the number of pixels of teacher images, is supplied to each of the class classification part 71 and the coefficient calculation part 76.

The class classification part 71 uses the class pixel group cutting-out section 711 to cut out the pixels necessary for class classification from the image data GS of the student image, and classifies the cut-out pixel group by using its pixel data, thereby determining its class value.

A student pixel group cutting-out section 761 in the coefficient calculation part 76 cuts out, from the image data GS of the student image, the pixel data used in calculating the predictive coefficients, and supplies it to a predictive coefficient learning part 762.

The predictive coefficient learning part 762 generates a normal equation for each class indicated by the class value supplied from the class classification part 71, by using the image data GT of the teacher image, the image data from the student pixel group cutting-out section 761, and the predictive coefficients. It then solves the normal equation for the predictive coefficients by using a general matrix solution method such as elimination, and stores the obtained coefficients in the predictive coefficient memory 72.
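The coefficient learning for one class can be sketched as a least-squares solution of the normal equation, with one row of student taps per training sample and the corresponding teacher pixel as the target; the names and the random training data are illustrative.

```python
import numpy as np

def learn_coefficients(student_taps, teacher_pixels):
    """Solve the normal equation (S^T S) k = S^T t for one class's
    predictive coefficients by least squares, as the predictive
    coefficient learning part would do for each class value."""
    S = np.asarray(student_taps, dtype=float)  # one row of taps per sample
    t = np.asarray(teacher_pixels, dtype=float)
    k, *_ = np.linalg.lstsq(S, t, rcond=None)
    return k

# If the teacher pixels really are a fixed linear combination of the
# student taps, learning recovers exactly those coefficients.
rng = np.random.default_rng(0)
true_k = np.array([0.5, 0.3, 0.2])
S = rng.random((50, 3))
t = S @ true_k
k = learn_coefficients(S, t)
print(np.round(k, 6))  # [0.5 0.3 0.2]
```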
Figures 29 and 30 show a flowchart of the operation for the case where spatial resolution creation processing is combined.
At step ST21, the CPU 61 acquires the image data DVa, and the processing proceeds to step ST22.

At step ST22, the CPU 61 sets a processing region, and the processing proceeds to step ST23.

At step ST23, the CPU 61 sets a variable i to 0 (i=0), and the processing proceeds to step ST24.

At step ST24, the CPU 61 determines whether the variable i is not equal to 0 (i≠0). If i=0, the processing proceeds to step ST25; if i≠0, the processing proceeds to step ST29.

At step ST25, the CPU 61 detects the motion vector relating to the processing region set at step ST22, and the processing proceeds to step ST26.

At step ST26, the CPU 61 acquires the exposure time parameter, and the processing proceeds to step ST27, at which the motion vector detected at step ST25 is corrected according to the exposure time parameter; the processing then proceeds to step ST28.

At step ST28, the CPU 61 performs the motion-blur-reduced object image generation processing shown in Figure 19 by using the corrected motion vector and the image data DVa, to generate a motion-blur-reduced image of the moving object, and the processing proceeds to step ST33.

At step ST33, the CPU 61 performs storage processing on this processing result: similarly to step ST10 of Figure 18, the motion-blur-reduced foreground image is combined with the background component data at the space-time position of the image data corresponding to the motion vector obtained at step ST27, thereby generating the image data DVout as the result of this processing. It then stores the image data DVout as the result in the storage area 63 or the RAM 64, and the processing proceeds to step ST34.

At step ST34, the CPU 61 moves the processing region according to the motion of the moving object to set a tracked processing region, and the processing proceeds to step ST35. In setting the tracked processing region, for example, the motion vector MV of the moving object OBf is detected and used; alternatively, the motion vector detected at step ST25 or ST29 is used.
At step ST35, the CPU 61 sets the variable i to i+1 (i=i+1), and the processing proceeds to step ST36.

At step ST36, the CPU 61 determines whether as many processing results as are needed for the motion vector detection have been stored. If it is determined at this step that image data DVout of as many motion-blur-reduced frames as are needed to allow the motion vector to be detected has not been stored, the processing returns to step ST24.

When the processing returns from step ST36 to step ST24 and the CPU 61 continues its processing, the variable i is no longer equal to 0 (i≠0), so the processing proceeds to step ST29, at which the motion vector relating to the tracked processing region set at step ST34 is detected, and the processing proceeds to step ST30.

At steps ST30–ST32, the CPU 61 performs the same processing as that performed at steps ST26–ST28, and the processing proceeds to step ST33. The CPU 61 repeats the processing from step ST33 until it is determined at step ST36 that as many results as are needed for the motion vector detection have been stored, whereupon the processing proceeds from step ST36 to step ST37 of Figure 30.
At step ST37, the CPU 61 sets the variable i to 0 (i=0), and the processing proceeds to step ST38.

At step ST38, the CPU 61 determines whether i is not equal to 0 (i≠0). If i=0, the processing proceeds to step ST39; if i≠0, the processing proceeds to step ST43.

At step ST39, the CPU 61 detects the motion vector relating to the processing region set at step ST22 by using the stored processing results, and the processing proceeds to step ST40.

At step ST40, the CPU 61 acquires the exposure time parameter, and the processing proceeds to step ST41, at which the motion vector detected at step ST39 is corrected according to the exposure time parameter; the processing then proceeds to step ST42.

At step ST42, the CPU 61 performs the motion-blur-reduced object image generation processing by using the corrected motion vector and the image data DVa, to generate an image of the moving object in which the motion blur has been reduced, and the processing proceeds to step ST47.

At step ST47, similarly to step ST33, the CPU 61 outputs this processing result and stores it, generating the image data DVout as the processing result and outputting it. It also stores the generated image data DVout in the storage area 63 or the RAM 64.
At step ST48, the CPU 61 performs spatial resolution creation processing by using the image data DVout generated at step ST47, to generate the image data DVz of an enlarged image of the screen size, in such a manner that the position indicated by the coordinate information HG is located at a fixed position on the screen.

At step ST49, similarly to step ST34, the CPU 61 sets the tracked processing region, and the processing proceeds to step ST50.

At step ST50, the CPU 61 sets the variable i to i+1 (i=i+1), and the processing proceeds to step ST51.

At step ST51, the CPU 61 determines whether the processing has finished. If it has not, the processing returns to step ST38.

When the processing returns from step ST51 to step ST38 and the CPU 61 performs the processing of step ST38, the variable i is not equal to 0 (i≠0), so the processing proceeds to step ST43, at which the motion vector relating to the tracked processing region is detected by using the stored results, and the processing proceeds to step ST44.

At steps ST44–ST46, the CPU 61 performs the same processing as at steps ST40–ST42, and the processing proceeds to step ST47, from which the processing is continued. Thereafter, when the image data DVa has ended or the corresponding operation has been completed, it is determined at step ST51 that the processing has finished, and the processing ends.
In addition, not only can spatial resolution creation be performed by using the motion-blur-reduced image data DVout, but the temporal resolution creation disclosed in the Japanese patent application with publication number 2002-199349 can also be performed, thereby generating a high-quality image with a high temporal resolution. Figure 31 shows the configuration of an apparatus for processing images that can thereby perform frame rate conversion while performing temporal resolution creation. In Figure 31, parts corresponding to those in Figure 5 are denoted by the same reference signs, and detailed descriptions of them are omitted.

Frequency information HF, which indicates the frame rates before and after the temporal resolution creation, is supplied to a motion vector detection section 30a and a temporal resolution creation part 90. As shown in Figure 32, the motion vector detection section 30a comprises the motion vector detection section 30 shown in Figure 6 described above, with a motion vector distribution portion 35 added to it. In Figure 32, parts corresponding to those in Figure 6 are denoted by the same reference signs, and detailed descriptions of them are omitted.
According to the motion vector MV and the supplied frequency information HF, the motion vector distribution portion 35 generates the motion vectors MVD of the newly generated frame images. For example, by using the motion vector MV, it distributes motion vectors to the pixels of the newly generated frame images, and supplies the distributed motion vectors MVD to the temporal resolution creation part 90.

The frequency information HF is information that indicates the frame rate conversion rate, such as double-rate conversion, 2.5-times-rate conversion, or the so-called 24-60 conversion used to convert a 24P image into a 60P image.
The following describes the processing for allocating motion vectors in a case where, for example, the frequency information HF indicates double-rate frame conversion. In this case, as shown in Figure 33, images of two frames RFn0 and RFn1 are created between the two most recent frames RFa and RFb of the image data DVa. Each of the two newly created frame images is set as a target frame. For each pixel of the target frame, the motion vector allocation section 35 detects, from among the motion vectors MV of the image data DVa supplied from the motion vector detection section 30, a motion vector that crosses that pixel, and allocates the detected motion vector as the motion vector MVC of the target frame image. For example, if, for a pixel PGn0x of the target frame RFn0, a motion vector MV-j crosses the image region PWn0x of the pixel PGn0x, that motion vector MV-j is allocated as the motion vector MVC-n0x of the pixel PGn0x. If a plurality of motion vectors cross, the average of the crossing motion vectors is calculated and allocated. If no crossing motion vector is detected, the average, or a weighted average, of the motion vectors of peripheral or neighboring pixels is calculated and allocated. In this manner, motion vectors are allocated to all the pixels of each target frame.
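The allocation rule just described (take the crossing vector, average multiple crossers, fall back to a neighborhood average) can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function name, the (H, W, 2) vector-field layout, and the rounding of the crossing position to the nearest pixel are all assumptions.

```python
import numpy as np

def allocate_motion_vectors(mv_field, t):
    """Allocate the per-pixel motion vectors of frame RFa (field of shape
    (H, W, 2), components (dy, dx)) to the pixels of a target frame created
    at intermediate time position t (0 < t < 1) between RFa and RFb."""
    H, W, _ = mv_field.shape
    acc = np.zeros((H, W, 2))
    hits = np.zeros((H, W), dtype=int)
    # A vector anchored at (y, x) in RFa crosses the target frame at the
    # point reached after the fraction t of its full displacement.
    for y in range(H):
        for x in range(W):
            dy, dx = mv_field[y, x]
            ty, tx = int(round(y + t * dy)), int(round(x + t * dx))
            if 0 <= ty < H and 0 <= tx < W:
                acc[ty, tx] += (dy, dx)
                hits[ty, tx] += 1
    out = np.zeros((H, W, 2))
    crossed = hits > 0
    # One or more crossing vectors: allocate their mean.
    out[crossed] = acc[crossed] / hits[crossed][:, None]
    # No crossing vector: allocate the mean of neighboring allocated vectors.
    for y, x in zip(*np.where(~crossed)):
        y0, y1, x0, x1 = max(0, y - 1), y + 2, max(0, x - 1), x + 2
        nb = crossed[y0:y1, x0:x1]
        if nb.any():
            out[y, x] = out[y0:y1, x0:x1][nb].mean(axis=0)
    return out
```

A uniform field moving 2 pixels to the right, sampled at t = 0.5, lands each vector one pixel to the right of its anchor; pixels left uncrossed at the frame edge inherit the neighborhood average.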
In addition, the processing region information HZ and the motion vector MVC generated by the motion vector detection section 30a are supplied to the motion blurring reduction image generation section 40. Further, the image data DVm read from the memory 55 is supplied not only to the motion vector detection section 30a but also to an image data selection section 85.
The image data selection section 85 selects either the image data DVa supplied from the image sensor 10 or the image data DVm read from the memory 55, and supplies it as image data DVs to the temporal resolution creation section 90.
The temporal resolution creation section 90 creates image data DVt having the desired frame rate, based on the image data DVs, the motion vector MVD, and the frequency information HF.
Figure 34 shows the configuration of the temporal resolution creation section. The temporal resolution creation section 90 comprises: a class classification section 91 for classifying a target pixel of the image data DVs; a prediction coefficient memory 92 for outputting the prediction coefficient corresponding to the class classification result of the class classification section 91; and a prediction calculation section 93 for generating frame-interpolated pixel data by performing prediction calculation using the prediction coefficient output from the prediction coefficient memory 92 and the image data DVs.
The image data DVs is supplied to a class pixel group cut-out section 913 in the class classification section 91 and to a prediction pixel group cut-out section 931 in the prediction calculation section 93. The frequency information HF is supplied to a temporal mode value determination section 911, and the motion vector MVD allocated to the target pixel of the frame to be created is supplied to the temporal mode value determination section 911 and to a position mode value determination section 915.
The temporal mode value determination section 911 determines, based on the supplied frequency information HF, a temporal mode value TM representing the time position of the frame to be created, and supplies it to a tap center position determination section 912, the position mode value determination section 915, and the prediction coefficient memory 92. Figure 35 schematically illustrates the operation of the temporal mode value determination section 911, which determines the temporal mode value related to the time position of the target frame to be created according to the frame rates before and after conversion.
Figure 35A shows a case where the frame rate is doubled. In this case, as described above, the two frames RFn0 and RFn1 are created as target frames between the two frames RFa and RFb of the image data DVa. Mode 0 or mode 1 is assigned according to which of the two frames RFn0 and RFn1 is being created: for example, to create a pixel value in the temporally earlier target frame RFn0, the temporal mode value is set to 0, and to create a pixel value in the other target frame RFn1, the temporal mode value is set to 1. On the other hand, Figure 35B shows a case where the frame rate is multiplied by 2.5 in the conversion. In this case, pixel values are created at four kinds of time positions of target frames, so that the temporal mode value takes one of 0 to 3 according to the position of the frame being created.
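As a rough illustration of how the number of temporal mode values follows from the conversion ratio, the sketch below enumerates the distinct fractional time positions at which frames must be created, assuming created frames are laid out at an even pitch offset half a pitch from the input frames (an assumption inferred from Figure 33, where RFn0 and RFn1 both sit strictly between RFa and RFb). Under that assumption it yields two modes for doubling and four for the 2.5-times case; the exact phase placement for any given conversion follows the figures, not this hypothetical helper.

```python
from fractions import Fraction

def temporal_modes(rate_ratio):
    """Map each distinct non-zero fractional time position (phase within an
    input frame interval) that must be created to a temporal mode value TM."""
    r = Fraction(rate_ratio)
    # Created frames at pitch 1/r, offset half a pitch from the input frames;
    # a phase of exactly 0 would coincide with an input frame (no creation needed).
    phases = sorted({((Fraction(k) + Fraction(1, 2)) / r) % 1
                     for k in range(r.numerator)} - {Fraction(0)})
    return {phase: tm for tm, phase in enumerate(phases)}
```

For a ratio of 2 this gives phases 1/4 and 3/4 (modes 0 and 1); for 2.5 it gives four phases (modes 0 to 3), matching the count stated for Figure 35B.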
The tap center position determination section 912 uses the motion vector MVD to determine the motion vector of the target pixel in the target frame indicated by the temporal mode value TM. That is, since the motion vector allocation section 35 has allocated motion vectors to the pixels of the newly created frame image, it selects the motion vector corresponding to the target pixel. Based on the determined motion vector, it detects the positions corresponding to the target pixel in the two frames of the image data DVs located respectively before and after the target frame, and sets them as the tap center positions TC.
Using the tap center position TC as a reference, the class pixel group cut-out section 913 cuts out, from the two frames of the image data DVs located respectively before and after the target frame, the pixels necessary for a class classification representing the degree of motion, and supplies them to a class value determination section 914.
Figure 36 shows examples of the class pixel groups cut out by the class pixel group cut-out section 913 according to the tap center position TC. Note that, in the figure, a solid circle represents the pixel at the tap center position TC, and a circle with an 'X' in it represents a pixel located around the tap center position TC and used as a class pixel. The class pixel groups are cut out from the two frames of the image data DVs located respectively before and after the target frame.
The class value determination section 914 calculates inter-frame differences for the pixel data of the pixel groups cut out by the class pixel group cut-out section 913, compares, for example, the absolute average of these inter-frame differences with a plurality of preset thresholds to classify it, and thereby determines a class value CM.
Figure 37 schematically illustrates the class value determination processing. The class value determination section 914 encodes the pixel values of the cut-out class pixel group by, for example, 1-bit ADRC (Adaptive Dynamic Range Coding), and sets as the class value the value obtained by regarding the result of this encoding (a bit string) as an integer value.
Note that, if a pixel value is represented by 8 bits, it can take any value from 0 to 255. In Figure 37, it is assumed that 5 pixels are cut out from each of the two frames of the image data DVs located respectively before and after the target frame, so that 10 pixels in total constitute the class pixel group. The difference between the maximum and minimum of the class pixel values of these 10 pixels constitutes the dynamic range DR. Because the encoding is 1-bit ADRC, the value obtained by halving the dynamic range DR is set as the median CLV, and each class pixel value is checked as to whether it is larger or smaller than this median CLV. If a class pixel value is smaller than the median CLV, it is encoded as '0'; if it is not smaller than the median CLV, it is encoded as '1'. In the example of Figure 37, the bit string resulting from the 1-bit ADRC encoding is 0000100001, and the integer value of this bit string (=33) is set as the class value.
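The 1-bit ADRC step can be sketched in a few lines. The helper below reproduces the worked example, assuming the tap order shown in the figure with the first pixel encoded into the most significant bit: a 10-pixel group whose fifth and tenth pixels alone exceed the midpoint of the dynamic range encodes to 0000100001, i.e. class value 33.

```python
def adrc_class_value(class_pixels):
    """1-bit ADRC: compare each class-tap pixel with the midpoint CLV of the
    group's dynamic range DR and read the resulting bit string as an integer
    (first pixel -> most significant bit)."""
    mx, mn = max(class_pixels), min(class_pixels)
    clv = mn + (mx - mn) / 2   # median CLV: minimum plus half the dynamic range DR
    bits = ''.join('1' if p >= clv else '0' for p in class_pixels)
    return int(bits, 2)
```

With 10 taps the class value ranges over 0-1023, which is why the halving tricks described next matter in practice.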
To reduce the number of classes, the value obtained by inverting the bit string of each encoding result may be used as the class value, which halves the number of classes. Further, if the taps are arranged vertically/horizontally symmetrically, rearranging the pixel values and performing the same calculation likewise halves the number of classes.
The position mode value determination section 915 determines a position mode value HM from the tap center position TC, the motion vector MVD, and the temporal mode value TM, and supplies it to the prediction coefficient memory 92. The tap center position TC is set according to the motion vector crossing the target pixel on the target frame and the position of that target pixel, so that, if the position of each pixel center is taken as an integer lattice point position, the tap center position TC may in some cases have a fractional offset (smaller than the pixel pitch) with respect to the integer lattice point position. The position mode value determination section 915 therefore determines the position mode value by performing class classification according to this fractional offset. Note that, if the frame rate is increased by a factor of 5 and one motion vector crosses the target pixel, the fractional offset with respect to the integer lattice point takes one of the five patterns 0, 0.2, 0.4, 0.6, and 0.8. If this fractional offset arises in each of the horizontal and vertical directions, there are 5 × 5 = 25 combinations, so that the position mode value HM is determined according to the corresponding combination of fractional offsets.
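The 5 × 5 = 25 combinations can be indexed as below. The snapping of the fractional offset to the nearest of the n phases, and the row-major packing of the two axis offsets into one mode number, are illustrative assumptions rather than the patented encoding.

```python
def position_mode_value(tap_center_x, tap_center_y, n_phases=5):
    """Classify the fractional offset of a (possibly non-integer) tap center
    position against the integer pixel lattice. With a 5x frame rate increase
    the per-axis offset falls on one of 0, 0.2, 0.4, 0.6, 0.8 (5 patterns),
    so the two axes combine into 5 * 5 = 25 position modes."""
    fx = int(round((tap_center_x % 1) * n_phases)) % n_phases
    fy = int(round((tap_center_y % 1) * n_phases)) % n_phases
    return fy * n_phases + fx
```

An on-lattice center maps to mode 0; an offset of (0.8, 0.6) maps to mode 3·5 + 4 = 19.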
The prediction coefficient memory 92 reads the prediction coefficient KF corresponding to the combination of the temporal mode value TM, the position mode value HM, and the class value CM supplied to it, and supplies this coefficient to a calculation section 932 in the prediction calculation section 93.
The prediction pixel group cut-out section 931 in the prediction calculation section 93 cuts out, from the pre-conversion image data DVs, the prediction taps TF used in the prediction calculation for the tap center position TC determined by the tap center position determination section 912, and supplies them to the calculation section 932. The calculation section 932 performs a one-dimensional linear operation using the prediction coefficient KF supplied from the prediction coefficient memory 92 and the prediction taps TF, thereby generating the post-conversion image data DVt.
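The coefficient look-up and the one-dimensional linear operation amount to an inner product, as sketched below. The dictionary keyed by (TM, HM, CM) is a stand-in for the prediction coefficient memory 92; all names are illustrative.

```python
import numpy as np

def predict_pixel(coeff_memory, tm, hm, cm, prediction_taps):
    """Read the coefficient set KF for the (temporal mode TM, position mode HM,
    class value CM) combination and form the one-dimensional linear combination
    with the prediction taps TF cut out around the tap center position."""
    kf = np.asarray(coeff_memory[(tm, hm, cm)], dtype=float)
    tf = np.asarray(prediction_taps, dtype=float)
    return float(tf @ kf)
```

For example, a learned coefficient set of four equal weights 0.25 simply averages the four taps.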
By thus detecting motion vectors using the image data DVm that has undergone the motion blurring reduction image generation processing and creating temporal resolution using the detected motion vectors, motion can be displayed accurately in the image after frame rate conversion. Further, if the image data selection section 85 selects the image data DVm as the image data DVs, an image of a new frame can be generated using the motion-blurring-reduced images. For example, if the image data DVout is a 24-frames/second image, temporal resolution creation can be performed to generate image data DVt of a 60-frames/second image in which motion blurring has been reduced. If instead the image data DVa is selected as the image data DVs, the image data DVt gives an image in which the frame rate of the image obtained by the image sensor 10 has been converted.
Figure 38 is a flowchart of the temporal resolution creation processing realized by software. At step ST61, the CPU 61 determines the temporal mode value TM from the frequency information HF. At step ST62, the CPU 61 performs the processing for determining the tap center position.
Figure 39 is a flowchart of the tap center position determination processing. At step ST621, the CPU 61 determines the position of the target pixel on the target frame. At step ST622, the CPU 61 calculates the positions corresponding to the target pixel: that is, based on the motion vector set for the target pixel in the target frame indicated by the temporal mode value TM, the positions corresponding to the target pixel in the two frames of the image data DVs located respectively before and after the target frame are calculated with decimal-place accuracy. At step ST623, the pixel position closest to each calculated position corresponding to the target pixel is determined as the tap center position TC.
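Under the assumption that the allocated motion vector spans the whole interval between the preceding and following frames, steps ST622-ST623 can be sketched as below; the function name and parameterization are illustrative.

```python
def tap_center_positions(px, py, mv_dx, mv_dy, t):
    """For a target pixel (px, py) in a frame created at relative time t
    (0 < t < 1) between the preceding frame (time 0) and the following frame
    (time 1), compute the decimal-accuracy corresponding positions in both
    frames (step ST622) and round each to the nearest pixel as the tap
    center position TC (step ST623)."""
    before = (px - t * mv_dx, py - t * mv_dy)
    after = (px + (1 - t) * mv_dx, py + (1 - t) * mv_dy)
    tc_before = (round(before[0]), round(before[1]))
    tc_after = (round(after[0]), round(after[1]))
    return tc_before, tc_after
```

For a pixel at (10, 10) on a mid-interval frame (t = 0.5) with a vector of (+2, 0), the tap centers land at (9, 10) in the preceding frame and (11, 10) in the following one.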
After the tap center position determination processing is completed, at step ST63 the CPU 61 determines the position mode value HM. In determining the position mode value HM, the difference between the position corresponding to the target pixel, calculated with decimal-place accuracy at step ST622, and the nearest pixel position is converted into the position mode value HM.
At step ST64, the CPU 61 cuts out the class pixel group based on the tap center position TC determined at step ST62, and at step ST65 the CPU 61 determines the class value CM from the cut-out class pixel value group.
At step ST66, the CPU 61 cuts out the prediction pixel group based on the tap center position TC determined at step ST62. At step ST67, the CPU 61 reads the prediction coefficient according to the class value CM, the position mode value HM, and the temporal mode value TM. At step ST68, the CPU 61 performs a one-dimensional linear combination (prediction calculation) of the plurality of pixels in the prediction pixel group and the prediction coefficients to generate the data of the target pixel in the target frame. At step ST69, the CPU 61 outputs the generated data of the target pixel as the image data DVt.
At step ST70, the CPU 61 judges whether all the pixels in the target frame have been processed. If not, the processing returns to step ST62; if all the pixels in the target frame have been processed, the processing ends.
The prediction coefficients stored in the prediction coefficient memory 92 can be created using a learning apparatus shown in Figure 40. Note that, in Figure 40, components corresponding to those in Figure 34 are denoted by the same reference symbols.
First, the frame rate of the image data GT of a teacher image (corresponding to an image of the target frame) is converted to generate image data GS of a student image (corresponding to an image of the image data DVs), and the image data GS is supplied to a class classification section 94 and a coefficient calculation section 95.
A motion vector detection section 941 in the class classification section 94 detects the motion vectors between a predetermined number of frames and supplies them to the tap center position determination section 912 and the position mode value determination section 915. The tap center position determination section 912 determines the tap center position as described above and supplies it to a class pixel group extraction section 913 and a student pixel group extraction section 951.
The student pixel group extraction section 951 cuts out, from the image data GS, a student pixel group composed of a plurality of student pixels according to the tap center position. The cut-out student pixel group is supplied to a prediction coefficient learning section 952.
The class pixel group extraction section 913 extracts a class pixel group composed of a plurality of student pixels according to the tap center position. The extracted class pixel group is supplied to the class value determination section 914, which determines the class value from the class pixel group as described above. The determined class value is supplied to the prediction coefficient learning section 952.
The position mode value determination section 915 determines the position mode value from the tap center position, the motion vector, and the temporal mode value as described above, and supplies it to the prediction coefficient learning section 952. In addition, according to the temporal mode value, a teacher pixel cut-out section 942 cuts out a teacher pixel, which is supplied to the prediction coefficient learning section 952.
The prediction coefficient learning section 952 uses the temporal mode value, the position mode value, the class value, the student pixel group, and the teacher pixel supplied to it to learn prediction coefficients for predicting the teacher pixel from the student pixel group. In learning the prediction coefficients, they are determined so as to minimize the sum of squares of the errors between the true values in the teacher image and the predicted values estimated by a one-dimensional linear operation on the student pixels and the plurality of prediction coefficients. As a practical calculation method, the prediction coefficients are determined so that the partial derivatives of the expression for the sum of squared errors become 0; the above-mentioned normal equations are set up accordingly and solved by a general matrix solution method such as the sweep-out method (Gauss-Jordan elimination) to calculate the prediction coefficients. The calculated prediction coefficients are stored in the prediction coefficient memory 92.
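The per-class accumulation and solution of the normal equations can be sketched as follows. This is a generic least-squares illustration, not the patented learner; `np.linalg.lstsq` stands in for the sweep-out (Gauss-Jordan) elimination named in the text, and the class key and API are assumptions.

```python
import numpy as np

class CoefficientLearner:
    """Accumulate, per class, the normal equations (A^T A) k = A^T b from
    (student pixel group, teacher pixel) pairs, then solve for the prediction
    coefficients k that minimize the sum of squared prediction errors."""
    def __init__(self, n_taps):
        self.n = n_taps
        self.ata = {}   # per-class A^T A
        self.atb = {}   # per-class A^T b

    def add_sample(self, class_key, student_pixels, teacher_pixel):
        # Step corresponding to "adding data to the normal equation" (ST90).
        x = np.asarray(student_pixels, dtype=float)
        if class_key not in self.ata:
            self.ata[class_key] = np.zeros((self.n, self.n))
            self.atb[class_key] = np.zeros(self.n)
        self.ata[class_key] += np.outer(x, x)
        self.atb[class_key] += x * teacher_pixel

    def solve(self, class_key):
        # lstsq also handles a singular accumulation gracefully.
        return np.linalg.lstsq(self.ata[class_key],
                               self.atb[class_key], rcond=None)[0]
```

Feeding it samples generated by a known linear rule recovers that rule's coefficients exactly once the accumulated system has full rank.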
Figure 41 is a flowchart of the processing for learning the prediction coefficients by software. At step ST81, the CPU 61 converts the frame rate using the image data of a teacher image to generate the image data of a student image. At step ST82, the CPU 61 determines the temporal mode value from the frequency information.
At step ST83, the CPU 61 detects the motion vectors of the teacher image, and at step ST84 determines the tap center position from the temporal mode value and the motion vector.
At step ST85, the CPU 61 determines the position mode value from the motion vector, the tap center position, and the temporal mode value.
At step ST86, the CPU 61 cuts out the class pixel group from the student image based on the tap center position information. At step ST87, the CPU 61 determines the class value from the class pixel group. At step ST88, the CPU 61 cuts out the student pixel group from the student image based on the tap center position information. At step ST89, the CPU 61 cuts out the teacher pixel from the teacher image.
Steps ST90-ST95 constitute the processing of learning the prediction coefficients by the least squares method. That is, the prediction coefficients are determined so as to minimize the sum of squares of the errors between the true values in the teacher image and the predicted values estimated by a one-dimensional linear combination of the student pixels and the plurality of prediction coefficients. As a practical calculation method, the prediction coefficients are determined so that the partial derivatives of the expression for the sum of squared errors become 0; normal equations are set up accordingly and solved by a general matrix solution method such as the sweep-out method to calculate the prediction coefficients.
At step ST90, the CPU 61 adds data to the normal equation for each class. At step ST91, the CPU 61 determines whether all the pixels in the frame have been processed; if not, the processing returns to step ST84 (determination of the tap center position), and if all the pixels in the frame have been processed, the processing proceeds to step ST92. At step ST92, the CPU 61 determines whether all the frames in the image have been processed; if not, the processing returns to step ST82, and if all the frames have been processed, the processing proceeds to step ST93. At step ST93, the CPU 61 determines whether all the input images have been processed; if not, the processing returns to step ST81, and if so, the processing proceeds to step ST94. At step ST94, the CPU 61 solves the normal equations by the sweep-out method, and at step ST95 outputs the obtained prediction coefficients and stores them in the prediction coefficient memory 92.
Figure 42 is an operational flowchart for the case where the above-described temporal resolution creation processing is combined with the motion blurring reduction processing.
At step ST101, the CPU 61 determines whether as many motion-blurring-reduction results as are needed for motion vector detection have been stored. If not, the processing returns to step ST101; once they have been stored, the processing proceeds to step ST102.
At step ST102, the CPU 61 detects motion vectors using the stored results, and the processing proceeds to step ST103.
At step ST103, the CPU 61 uses the detected motion vectors to allocate a motion vector to each pixel in the frame to be newly created, and the processing proceeds to step ST104.
At step ST104, the CPU 61 corrects the motion vectors according to the exposure time parameter obtained in the motion blurring reduction processing, and the processing proceeds to step ST105.
At step ST105, the CPU 61 performs temporal resolution creation processing based on the stored results and the motion vectors obtained at step ST104, to generate image data DVt having the post-creation frame rate, and the processing proceeds to step ST106.
At step ST106, if the image data DVa has not ended and no end operation has been performed, the processing returns to step ST102; if the image data DVa has ended or an end operation has been performed, it is decided that the processing is completed and the processing ends.
Thus, motion vectors are detected accurately using the motion-blurring-reduced images, so that a high-temporal-resolution image with little motion blurring can be obtained using the detected motion vectors.
Further, if region selection information HA indicating the image region to be subjected to motion blurring reduction is supplied to the image processing apparatus 20, the motion blurring reduction image generation section 40 performs the motion blurring reduction processing on the region selected by the region selection information HA. In addition, the processing region is moved continuously according to the motion vector MV detected by the motion vector detection section 30; that is, the motion vector detection section 30 can track the moving object. In this manner, merely by setting a region once according to the motion of the moving object, the processed image region can be moved so as to match that motion, and motion blurring can be reduced only in the region containing the moving object, so that motion blurring is reduced efficiently.
Figure 43 is an operational flowchart for the case where the region whose motion blurring is to be reduced has been selected.
At step ST111, the CPU 61 acquires, via the input section, the communication section, or the like, the image data DVa generated by the image sensor 10, and stores the acquired image data DVa in the storage section 63.
At step ST112, the CPU 61 detects the motion vector MV of each pixel, and the processing proceeds to step ST113.
At step ST113, the CPU 61 acquires the exposure time parameter HE, and the processing proceeds to step ST114, at which the motion vector MV detected at step ST112 is corrected according to the exposure time parameter HE, thereby generating a motion vector MVB.
At step ST115, the CPU 61 determines whether an image region in which motion blurring is to be reduced has been selected. If no image region has been selected, the processing proceeds to step ST116, at which motion blurring reduction image generation processing is performed on the entire screen to generate image data DVct, and the processing then proceeds to step ST119. If an image region has been selected, the processing proceeds to step ST117, at which the selected region is updated: for example, by moving the selected region, whose motion blurring is to be reduced, according to the motion of the moving object within it, the selected region can track the moving object. At step ST118, motion blurring reduction image generation processing is performed on the selected region thus moved to generate the image data DVct, and the processing proceeds to step ST119.
At step ST119, the CPU 61 determines whether as many frames of the motion-blurring-reduced image data DVct as are needed for temporal resolution creation have been stored. If not, the processing returns to step ST112; if they have been stored, the processing proceeds to step ST120.
At step ST120, according to the frequency information HF, the CPU 61 allocates motion vectors to the pixels of the frame to be generated using the motion vectors MV detected at step ST112, thereby generating the motion vector MVC.
At step ST121, using the frequency information HF, the motion-blurring-reduced image data DVct, and the allocated motion vector MVC, the CPU 61 performs temporal resolution creation processing to generate the image data DVft of the frame to be newly created.
At step ST122, the CPU 61 determines whether the processing should end. If the image data DVa has not ended and no end operation has been performed, the processing returns to step ST112; if the image data DVa has ended or an end operation has been performed, the processing ends.
By performing the processing in this manner, motion blurring in the region selected by the region selection information HA can be reduced, and the motion-blurring-reduced images can be used to perform temporal resolution creation.
Industrial Applicability
As described above, the apparatus for processing an image, the method for processing an image, and the program according to the present invention are useful for detecting motion vectors and for image processing using the detected motion vectors, and are thus highly suitable for processing images of moving objects.

Claims (11)

1. An apparatus for processing an image, the apparatus comprising:
motion vector detection means for detecting a motion vector using an image made up of a plurality of pixels, obtained by an image sensor having a time integration effect;
temporal resolution creation means for generating an image having a higher temporal resolution than the image made up of the plurality of pixels, using the motion vector detected by the motion vector detection means and the image made up of the plurality of pixels; and
motion blurring reduction image generation means for generating, using the motion vector detected by the motion vector detection means, a motion blurring reduction image in which the motion blurring of a moving object has been reduced, on the assumption that the pixel value of each pixel of the moving object in the image is a value obtained by integrating, in the time direction while shifting, the pixel values of pixels free of the motion blurring corresponding to the moving object.
2. The apparatus for processing an image according to claim 1, wherein the motion vector detection means detects motion vectors using a plurality of images, each made up of a plurality of pixels and obtained by the image sensor, generates from the detected motion vectors a motion vector for the image having the higher temporal resolution, and supplies this motion vector to the temporal resolution creation means.
3. The apparatus for processing an image according to claim 1, wherein the motion vector detection means detects motion vectors using a plurality of images, each made up of a plurality of pixels and obtained by the image sensor, corrects the detected motion vector according to the exposure time, and supplies it to the motion blurring reduction image generation means.
4. The apparatus for processing an image according to claim 1, wherein the temporal resolution creation means uses the motion blurring reduction image to generate an image having a higher temporal resolution than the motion blurring reduction image.
5. The apparatus for processing an image according to claim 4, wherein the temporal resolution creation means comprises:
class determination means for determining the motion vector of a target pixel in the image of higher temporal resolution to be created, using the motion vector detected by the motion vector detection means, extracting from the motion blurring reduction image a plurality of pixels corresponding to the target pixel as class taps, and determining the class corresponding to the target pixel according to the pixel values of the class taps;
storage means for learning, between a first image and a second image, prediction coefficients each for predicting a target pixel from a plurality of pixels in the first image, and storing the generated prediction coefficients for each class, the first image having a temporal resolution corresponding to that of the motion blurring reduction image, the second image having a temporal resolution higher than that of the first image, and the plurality of pixels in the first image corresponding to the target pixel in the second image; and
prediction value generation means for retrieving from the storage means the prediction coefficient corresponding to the class determined by the class determination means, extracting from the motion blurring reduction image a plurality of pixels corresponding to the target pixel in the image to be generated as prediction taps, and generating the prediction value corresponding to the target pixel by a one-dimensional linear combination of the prediction coefficient retrieved from the storage means and the prediction taps.
6. A method for processing an image, the method comprising:
a motion vector detection step of detecting a motion vector using an image made up of a plurality of pixels, obtained by an image sensor having a time integration effect;
a temporal resolution creation step of generating an image having a higher temporal resolution than the image made up of the plurality of pixels, using the motion vector detected in the motion vector detection step and the image made up of the plurality of pixels; and
a motion blurring reduction image generation step of generating, using the motion vector detected in the motion vector detection step, a motion blurring reduction image in which the motion blurring of a moving object has been reduced, on the assumption that the pixel value of each pixel of the moving object in the image is a value obtained by integrating, in the time direction while shifting, the pixel values of pixels free of the motion blurring corresponding to the moving object.
7. The image processing method according to claim 6, wherein in the motion vector detecting step, a motion vector is detected by using a plurality of images each composed of a plurality of pixels obtained by the image sensor, and a motion vector for the image of higher temporal resolution is generated from the detected motion vector and supplied to the temporal resolution creating step.
8. The image processing method according to claim 6, wherein in the motion vector detecting step, a motion vector is detected by using a plurality of images each composed of a plurality of pixels obtained by the image sensor, and the detected motion vector is corrected according to the exposure time and supplied to the motion-blur-reduced image generating step.
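Claim 8's exposure-time correction can be read as scaling the frame-to-frame motion vector by the fraction of the frame interval during which the shutter is open; the simple linear scaling rule below is an assumption about how that correction might look:

```python
def correct_for_exposure(motion_vector, exposure_time, frame_interval):
    """Scale a per-frame motion vector down to the motion that occurs
    within the exposure time (a shutter-speed correction)."""
    scale = exposure_time / frame_interval
    return tuple(component * scale for component in motion_vector)

# A vector of (8, 4) pixels/frame captured with a half-open shutter.
print(correct_for_exposure((8.0, 4.0), exposure_time=1/120, frame_interval=1/60))
# -> (4.0, 2.0)
```

With a fully open shutter (exposure equal to the frame interval) the vector is unchanged; shorter exposures accumulate proportionally less motion, so less blur needs to be removed.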
9. The image processing method according to claim 6, wherein in the temporal resolution creating step, the motion-blur-reduced image is used to generate an image having a higher temporal resolution than the motion-blur-reduced image.
10. The image processing method according to claim 9, wherein the temporal resolution creating step comprises:
a class determining step of determining, by using the motion vector detected in the motion vector detecting step, a motion vector for a target pixel in the image of higher temporal resolution to be created, extracting, from the motion-blur-reduced image, a plurality of pixels corresponding to the target pixel as class taps, and determining a class corresponding to the target pixel according to the pixel values of the class taps;
a storing step of storing, for each class, predictive coefficients generated by learning between a first image and a second image, each predictive coefficient being used for predicting a target pixel from a plurality of pixels in the first image, the first image having a temporal resolution corresponding to the motion-blur-reduced image, the second image having a temporal resolution higher than that of the first image, and the plurality of pixels in the first image corresponding to the target pixel in the second image; and
a predicted-value generating step of reading, from the coefficients stored in the storing step, the predictive coefficients corresponding to the class determined in the class determining step, extracting, from the motion-blur-reduced image, a plurality of pixels corresponding to the target pixel in the image to be generated as prediction taps, and generating a predicted value corresponding to the target pixel by a one-dimensional linear combination of the prediction taps and the read predictive coefficients.
11. A program for causing a computer to execute:
a motion vector detecting step of detecting a motion vector by using images each composed of a plurality of pixels obtained by an image sensor having a time integration effect;
a temporal resolution creating step of generating an image having a higher temporal resolution than the images composed of the plurality of pixels, by using the motion vector detected in the motion vector detecting step and the images composed of the plurality of pixels; and
a motion-blur-reduced image generating step of generating, by using the motion vector detected in the motion vector detecting step, a motion-blur-reduced image in which motion blur of a moving object is reduced, on the assumption that the pixel value of each pixel of the moving object in an image is a value obtained by integrating, in the time direction, the pixel values of pixels free of the motion blur corresponding to the moving object while shifting them as the moving object moves.
CNB2005800001376A 2004-02-13 2005-02-10 Image processing apparatus, image processing method and program Expired - Fee Related CN100423557C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP037250/2004 2004-02-13
JP2004037249 2004-02-13
JP037249/2004 2004-02-13

Publications (2)

Publication Number Publication Date
CN1765123A true CN1765123A (en) 2006-04-26
CN100423557C CN100423557C (en) 2008-10-01

Family

ID=36748361

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005800001376A Expired - Fee Related CN100423557C (en) 2004-02-13 2005-02-10 Image processing apparatus, image processing method and program

Country Status (1)

Country Link
CN (1) CN100423557C (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101543053B (en) * 2007-02-07 2011-07-06 索尼株式会社 Image processing device, image picking-up device, and image processing method
CN101848343B (en) * 2009-03-24 2013-04-17 财团法人工业技术研究院 Image sensor with integral image output
WO2021056220A1 (en) * 2019-09-24 2021-04-01 北京大学 Video coding and decoding method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4483085B2 (en) * 2000-12-25 2010-06-16 ソニー株式会社 Learning device, application device, learning method, and application method
JP4596217B2 (en) * 2001-06-22 2010-12-08 ソニー株式会社 Image processing apparatus and method, recording medium, and program
JP4840630B2 (en) * 2001-06-27 2011-12-21 ソニー株式会社 Image processing apparatus and method, recording medium, and program
JP4596227B2 (en) * 2001-06-27 2010-12-08 ソニー株式会社 COMMUNICATION DEVICE AND METHOD, COMMUNICATION SYSTEM, RECORDING MEDIUM, AND PROGRAM

Also Published As

Publication number Publication date
CN100423557C (en) 2008-10-01

Similar Documents

Publication Publication Date Title
CN1765124A (en) Image processing device, image processing method, and program
CN1160967C (en) Picture encoder and picture decoder
CN1208970C (en) Image processing apparatus
CN1258909C (en) Moving picture synthesizer
CN1138422C (en) Process for interpolating progressive frames
CN1138420C (en) Image processor, image data processor and variable length encoder/decoder
CN1960496A (en) Motion vector estimation apparatus
CN1248162C (en) Image processing apparatus and method and image pickup apparatus
CN1237488C (en) Image processing apparatus and method and image pickup apparatus
CN1947152A (en) Image processing apparatus and method, and recording medium and program
CN1272286A (en) Block noise detector and block noise eliminator
CN1123230C (en) Picture coding, decoding apparatus and program record medium
CN1220390C (en) Image processing equipment, image processing program and method
CN1168322C (en) Image coding/decoding method and recorded medium on which program is recorded
CN1216199A (en) Digital image replenishment method, image processing device and data recording medium
CN1406438A (en) Information signal processing device, information signal processing method, image signal processing device and image display device using it, coefficient type data creating device used
CN1726529A (en) Image signal processing apparatus, image signal processing method, program for practicing that method, and computer-readable medium in which that program has been recorded
CN1249629C (en) Image processor
CN1765123A (en) Image processing apparatus, image processing method and program
CN1233151C (en) Image processing device and method, and imaging device
CN1224256C (en) Information signal processing device, information signal processing method, image signal processing device, image signal processing method and image displaying method using it, coeficient kind data
CN1479253A (en) Image processing device, computer program product and image processing method
CN1293757C (en) Device and method for data conversion, device and method for learning, and recording medium
CN1628465A (en) Motion vector detection device and motion vector detection method
CN1787606A (en) Method and apparatus for processing image, recording medium and computer program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081001

Termination date: 20140210