CN101090456A - Image processing device and method, image pickup device and method - Google Patents

Image processing device and method, image pickup device and method

Info

Publication number
CN101090456A
CN101090456A, CNA2007101086584A, CN200710108658A
Authority
CN
China
Prior art keywords
motion vector
rotation
image
block
block motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2007101086584A
Other languages
Chinese (zh)
Other versions
CN101090456B (en)
Inventor
仓田徹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Publication of CN101090456A publication Critical patent/CN101090456A/en
Application granted granted Critical
Publication of CN101090456B publication Critical patent/CN101090456B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/223 Analysis of motion using block-matching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43 Hardware specially adapted for motion estimation or compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/527 Global motion vector estimation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/533 Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/537 Motion estimation other than block-based
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 Motion detection
    • H04N23/6811 Motion detection based on the image signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Disclosed herein is an image processing device including: per-block motion vector calculating means for calculating a motion vector between two pictures of an image input sequentially in picture units, performing block matching in each of divided regions obtained by dividing one picture into a plurality of regions, and calculating a per-block motion vector for each of said divided regions; translation amount calculating means for calculating an amount of translation of the other of said two pictures with respect to one of said two pictures from a plurality of said per-block motion vectors calculated by said per-block motion vector calculating means; rotation angle calculating means for calculating a rotation angle of the other of said two pictures with respect to one of said two pictures from the plurality of said per-block motion vectors calculated by said per-block motion vector calculating means; and rotation and translation adding means for superimposing a plurality of pictures on each other using the amount of translation calculated by said translation amount calculating means and the rotation angle calculated by said rotation angle calculating means.

Description

Image processing device and method, image pickup device and method
Cross-Reference to Related Applications
The present invention contains subject matter related to Japanese Patent Application JP 2006-164209 filed with the Japan Patent Office on June 14, 2006, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to an image processing device, an image processing method, an image pickup device, and an image pickup method that can correct a so-called hand-shake component (hand movement component) contained in image information obtained by image pickup in an image pickup device such as a digital still camera or a video camera, thereby obtaining an image free of the hand-shake component.
Background Art
Generally, when shooting is performed with a hand-held image pickup device such as a digital still camera or a video camera, vibration of the image pickup device caused by hand shake during shooting appears as picture-unit vibration of the captured image.
As a method for correcting such vibration of the captured image caused by hand shake, optical hand-shake correction systems using a gyro (angular velocity) sensor have become mainstream on the recent market, owing to cost reductions, performance improvements, and size reductions of gyro sensors.
In the past few years, however, the rapid spread of digital cameras together with the rapid increase in pixel counts has raised a new problem: although hand-shake correction is needed just as strongly for still images shot at low illuminance (with long exposure times), solutions that rely on sensors such as gyro sensors expose weaknesses of the gyro sensor itself (for example, its limited detection accuracy) and other problems.
Hand-shake correction for still images in consumer devices currently on the market all uses a gyro sensor or an acceleration sensor to measure a hand-shake vector and feeds the hand-shake vector back to a mechanical system under high-speed control, thereby preventing blurring of the image projected onto an image sensor such as a CCD (Charge Coupled Device) imager or a CMOS (Complementary Metal Oxide Semiconductor) imager.
The mechanical system referred to here is a lens, a prism, or the imager (or a module integrating the imager), and the corresponding control of the lens, prism, or imager is referred to as a lens shift, a prism shift, or an imager shift, respectively.
As long as hand-shake correction is performed by this method, correction with full pixel precision cannot be achieved, because not only does the precision error of the gyro sensor itself accumulate, but so do the delay of the feedback to the mechanical system, the prediction error introduced to avoid that feedback delay, and the control error of the mechanical system.
Although sensor-based hand-shake correction thus has the serious problem that, in principle, its precision cannot be improved under the current circumstances, hand-shake correction on the market has nevertheless been well received, because even though it cannot remove hand shake completely, it does reduce it.
However, with the further increases in pixel counts and reductions in pixel size expected from now on, it is only a matter of time before the market becomes aware of the widening gap between this correction limit and pixel precision.
On the other hand, as another method for correcting vibration of the captured image caused by hand shake, a sensorless hand-shake correction method is known which calculates a motion vector of the captured image in picture units and, based on this motion vector, shifts the position from which the captured image data stored in an image memory is read, thereby performing hand-shake correction.
As a method for detecting the motion vector of the captured image in picture units, block matching, which determines the correlation between the captured images of two pictures from the captured image information itself, is known. A sensorless hand-shake correction method using block matching can, in principle, detect the hand-shake vector with pixel precision, including the rotational component about the roll axis, and, since it eliminates the need for mechanical parts such as a gyro sensor, it has the advantage of allowing the size and weight of the image pickup device to be reduced.
Figs. 71 and 72 schematically show an outline of block matching. Fig. 73 is an example of a general flowchart of a block matching process.
Block matching is a method in which the motion vector, in units of one picture, between a reference picture, which is a picture of interest in the captured image from the image pickup device, and an original picture, which is the captured picture one picture earlier than the reference picture, is calculated by computing the correlation between the two pictures in blocks, each block being a rectangular region of a predetermined size.
Incidentally, in this specification, for convenience, it is assumed that a picture is formed by one frame, and a picture is referred to as a frame, although a picture here means an image formed by the image data of one frame or one field. Accordingly, the reference picture will be referred to as the reference frame, and the original picture will be referred to as the original frame (target frame).
For example, the image data of the reference frame is the image data of the current frame from the image pickup device, or image data obtained by storing the image data of the current frame in a frame memory and thereby delaying it by one frame. The image data of the original frame is image data obtained by further storing the image data of the reference frame in the frame memory and thereby delaying it by one more frame.
As shown in Fig. 71, in block matching, a target block 103 formed by a rectangular region of a predetermined size, comprising a plurality of pixels in the horizontal direction and a plurality of lines in the vertical direction, is set at an arbitrary predetermined position in the original frame 101.
On the other hand, in the reference frame 102, a projected image block 104 of the target block (see the dotted line in Fig. 71) is assumed at the same position as that of the target block 103 in the original frame, a search range 105 (see the alternating long and short dashed line in Fig. 71) is set with the projected image block 104 of the target block as its center, and a reference block 106 having the same size as the target block 103 is considered.
The reference block 106 is then moved to positions within the search range 105 in the reference frame 102. At each position, the correlation between the image contents of the reference block 106 and the image contents of the target block 103 is determined. The position of the reference block 106 at which the correlation is strongest is detected as the position in the reference frame 102 to which the target block 103 of the original frame has moved. The positional offset between the detected position of the reference block 106 and the position of the target block is then determined as the motion vector, as a quantity including a direction component.
In this case, the reference block 106 is moved within the search range 105 in units of, for example, one pixel or a plurality of pixels in the horizontal and vertical directions. Accordingly, a plurality of reference blocks are set within the search range 105.
The correlation between the target block 103 and each reference block 106 moved within the search range 105 is detected by obtaining the sum of the absolute values of the differences between the luminance values of all the pixels in the target block 103 and the luminance values of the corresponding pixels in the reference block 106 (this sum of absolute differences will hereinafter be referred to as the SAD (Sum of Absolute Differences) value). That is, the reference block 106 at the position of the minimum SAD value is detected as the reference block with the strongest correlation, and the positional offset of the detected reference block 106 with respect to the position of the target block 103 is determined as the motion vector.
In block matching, the positional offset of each of the plurality of reference blocks 106 set within the search range 105 with respect to the position of the target block 103 is represented by a reference vector 107 (see Fig. 71) as a quantity including a direction component. The reference vector 107 of each reference block 106 has a value corresponding to the position of that reference block 106 in the reference frame 102. In conventional block matching, the reference vector of the reference block 106 from which the minimum SAD value is obtained is detected as the motion vector corresponding to the target block 103.
Generally, as shown in Fig. 72, in block matching, the SAD value between each of the plurality of reference blocks 106 set within the search range 105 and the target block 103 (referred to hereinafter, to simplify the description, as the SAD value of the reference block) is stored in a memory in association with the corresponding reference vector 107, each reference vector 107 corresponding to the position of the corresponding reference block 106 in the search range 105. The reference block 106 having the minimum SAD value among the SAD values of all the reference blocks 106 stored in the memory is then detected, whereby the motion vector 110 corresponding to the target block 103 is detected.
The table that stores the SAD value of each reference block 106 in association with each reference vector 107, each reference vector 107 corresponding to the position of one of the plurality of reference blocks 106 set in the search range 105, is referred to as a sum-of-absolute-differences table (hereinafter referred to as a SAD table). The SAD table 108 in Fig. 72 shows this table. Each SAD value of a reference block 106 in the SAD table 108 is referred to as a SAD table element 109.
Incidentally, in the above description, the positions of the target block 103 and the reference block 106 refer to an arbitrary specific position in each block, for example the center of the block. The reference vector 107 represents the offset (including direction) between the position of the projected image block 104 of the target block 103 in the reference frame 102 and the position of the reference block 106. In the examples of Figs. 71 and 72, the target block 103 is located at the center of the frame.
The reference vector 107 of each reference block 106 represents the positional offset of that reference block 106 with respect to the position corresponding to the target block 103 in the reference frame 102. Therefore, when the position of a reference block 106 is specified, the value of the reference vector corresponding to that position is also specified. Accordingly, when the address of the SAD table element of a reference block in the memory of the SAD table 108 is specified, the corresponding reference vector is specified.
The processing of the above conventional block matching will next be described with reference to the flowchart of Fig. 73.
First, one reference block Ii in the search range 105 is specified. This is equivalent to specifying the reference vector corresponding to the reference block Ii (step S1). In Fig. 73, with the position of the target block taken as the reference position (0, 0) in the frame, (vx, vy) denotes the position indicated by the specified reference vector: vx is the horizontal component of the offset of the specified reference vector from the reference position, and vy is the vertical component of that offset.
In this case, the offsets vx and vy are values in units of pixels. For example, vx = +1 denotes a position shifted to the right by one pixel in the horizontal direction with respect to the reference position (0, 0), and vx = -1 denotes a position shifted to the left by one pixel in the horizontal direction with respect to the reference position (0, 0). Similarly, vy = +1 denotes a position shifted downward by one pixel in the vertical direction with respect to the reference position (0, 0), and vy = -1 denotes a position shifted upward by one pixel in the vertical direction with respect to the reference position (0, 0).
As described above, (vx, vy) denotes the position indicated by a reference vector with respect to the reference position (hereinafter referred to simply as the position indicated by the reference vector), and corresponds to each reference vector. That is, with vx and vy taken as integers, (vx, vy) denotes each reference vector. Accordingly, in the following description, the reference vector indicating the position (vx, vy) will be written as the reference vector (vx, vy).
With the center of the search range taken as the position of the target block, that is, the reference position (0, 0), and with the search range limited to ±Rx in the horizontal direction and ±Ry in the vertical direction, the search range is expressed as:
-Rx ≤ vx ≤ +Rx, -Ry ≤ vy ≤ +Ry
Next, the coordinates (x, y) of one pixel in the target block Io are specified (step S2). Then, the absolute value α of the difference between the pixel value Io(x, y) at the specified coordinates (x, y) in the target block Io and the pixel value Ii(x + vx, y + vy) at the corresponding pixel position in the reference block Ii is calculated (step S3). That is, the absolute difference α is calculated by the following equation:
α = |Io(x, y) - Ii(x + vx, y + vy)|    ... (Equation 1)
Then, the calculated absolute difference α is added to the SAD value previously stored at the address (table element) indicated by the reference vector (vx, vy) of the reference block Ii, and the SAD value resulting from the addition is written back to that address (step S4). That is, with the SAD value corresponding to the reference vector (vx, vy) written as SAD(vx, vy), the SAD value is obtained as:
SAD(vx, vy) = Σα = Σ|Io(x, y) - Ii(x + vx, y + vy)|    ... (Equation 2)
and is written to the address indicated by the reference vector (vx, vy).
Next, it is determined whether the above operation has been performed for the pixels at all coordinates (x, y) in the target block Io (step S5). When it is determined that the operation has not yet been completed for the pixels at all coordinates (x, y) in the target block Io, the process returns to step S2, the next pixel position (x, y) in the target block Io is specified, and the processing from step S2 onward is repeated.
When it is determined in step S5 that the above operation has been performed for the pixels at all coordinates (x, y) in the target block Io, it is determined that the calculation of the SAD value for this reference block has been completed. It is then determined whether the above processing has been completed for all the reference blocks in the search range, that is, for all the reference vectors (vx, vy) (step S6).
When it is determined in step S6 that there remains a reference vector (vx, vy) for which the above processing has not been completed, the process returns to step S1, the next reference vector (vx, vy) for which the processing has not been completed is set, and the processing from step S1 onward is repeated.
When it is determined in step S6 that there is no reference vector (vx, vy) in the search range for which the above processing has not been completed, it is determined that the SAD table has been completed. The minimum SAD value is detected in the completed SAD table (step S7). Then, the reference vector corresponding to the address of the minimum SAD value is detected as the motion vector corresponding to the target block Io (step S8). With the minimum SAD value written as SAD(mx, my), the desired motion vector is calculated as the vector (mx, my) indicating the position (mx, my).
The process of detecting the motion vector corresponding to one target block by block matching is thus completed.
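To make the flow of steps S1 to S8 concrete, the following is a minimal Python sketch of an exhaustive SAD search for a single target block. It is an illustrative reimplementation rather than code from the patent; the array layout, the function signature, and the use of NumPy are assumptions, and the sketch assumes the whole search range lies inside the reference frame.

```python
import numpy as np

def block_match(original, reference, top, left, block_h, block_w, Rx, Ry):
    """Exhaustive SAD search for one target block (illustrative sketch).

    original, reference: 2-D luminance arrays of the original and reference frames.
    (top, left): upper-left corner of the target block in the original frame.
    Rx, Ry: search radii, so -Rx <= vx <= +Rx and -Ry <= vy <= +Ry.
    Returns the per-block motion vector (mx, my) and the SAD table.
    """
    target = original[top:top + block_h, left:left + block_w].astype(np.int64)
    sad_table = np.empty((2 * Ry + 1, 2 * Rx + 1), dtype=np.int64)

    for vy in range(-Ry, Ry + 1):              # steps S1 and S6: every reference vector
        for vx in range(-Rx, Rx + 1):
            ref = reference[top + vy:top + vy + block_h,
                            left + vx:left + vx + block_w].astype(np.int64)
            # Steps S2 to S5: accumulate |Io(x, y) - Ii(x + vx, y + vy)| over the block.
            sad_table[vy + Ry, vx + Rx] = np.abs(target - ref).sum()

    # Steps S7 and S8: the reference vector of the minimum SAD value is the motion vector.
    iy, ix = np.unravel_index(np.argmin(sad_table), sad_table.shape)
    return (ix - Rx, iy - Ry), sad_table
```

The SAD table is returned as well because the embodiment described later reuses the per-block tables to build a total SAD table.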
In practice, it is difficult to obtain a highly accurate hand-shake vector of the reference frame with respect to the original frame from the motion vector corresponding to a single target block. Therefore, a plurality of target blocks are set in the original frame so as to cover the whole range of the original frame. On the other hand, as shown in Fig. 74, in the reference frame, search ranges 105, 105, ... are set for the projected images 104, 104, ... of the plurality of target blocks, respectively, and motion vectors 110, 110, ... corresponding to the target blocks are detected in the respective search ranges.
Then, the hand-shake vector (global motion vector) of the reference frame with respect to the original frame is detected from the plurality of detected motion vectors 110, 110, ....
As a main method for detecting the hand-shake vector (global motion vector) from the plurality of motion vectors 110, a method has been proposed in which a majority decision is performed on the plurality of motion vectors, that is, the motion vector whose direction and magnitude occur most frequently among the plurality of motion vectors 110 is set as the global motion vector. In addition, a method has been proposed in which this majority decision is combined with a reliability evaluation based on the amount of change (frequency) of the motion vectors along the time axis.
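As a rough illustration of the majority-decision approach (a hedged sketch, not the patent's method; quantizing the vectors to whole pixels before counting is an assumption):

```python
from collections import Counter

def global_vector_by_majority(block_vectors):
    """Take the most frequent per-block motion vector as the global motion vector.

    block_vectors: iterable of (vx, vy) tuples, one per target block.
    Vectors are rounded to integer pixels so that identical or equivalent
    vectors fall into the same bin before the majority count.
    """
    counts = Counter((round(vx), round(vy)) for vx, vy in block_vectors)
    (gx, gy), _ = counts.most_common(1)[0]
    return gx, gy
```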
Most of the prior-art sensorless hand-shake correction, as typified by Patent Document 1 (Japanese Patent Laid-Open No. 2003-78807), targets moving images. Few methods have been proposed for realizing sensorless hand-shake correction for still images; one of them is Patent Document 2 (Japanese Patent Laid-Open No. Hei 7-283999). Patent Document 2 describes an algorithm that shoots still images consecutively with exposure times short enough that no hand-shake component is produced, obtains the hand-shake vectors between the still images, adds the plurality of consecutively shot still images together while shifting them according to the hand-shake vectors, and finally obtains a high-image-quality (high-resolution) still image free of the hand-shake component and of low-illuminance noise.
Patent Document 3 (Japanese Patent Laid-Open No. 2005-38396) can be regarded as a scheme at a genuinely feasible level. The device disclosed in Patent Document 3 includes means for obtaining a motion vector at the size resulting from reduction conversion of the image, and means for sharing the same SAD table among a plurality of blocks. Reduction conversion of the image and sharing of the SAD table among a plurality of blocks are excellent methods for reducing the size of the SAD table, and are also used in other fields, for example motion vector detection and scene change detection in MPEG (Moving Picture Experts Group) image compression systems.
However, the algorithm of Patent Document 3 has the following problems. The reduction conversion of the image and the accesses to memory (DRAM (Dynamic Random Access Memory)) required for the reduction conversion consume time and memory space, and since the SAD table is accessed by a plurality of blocks in a time-division manner, memory accesses increase greatly and this processing also takes time. Hand-shake correction for moving images requires real-time performance and a short system delay time, so the processing time becomes a problem.
In addition, the reduction conversion of the original image requires a low-pass filter for removing aliasing (aliasing distortion) and low-illuminance noise as preprocessing for the reduction processing. However, the characteristics of the low-pass filter change according to the reduction scale factor, and particularly when the low-pass filter is a vertical filter with a large number of taps, a plurality of line memories and arithmetic logic are needed, so that there arises the problem of an increase in circuit scale.
Summary of the Invention
For a hand-shake correction system for moving images, rough hand-shake vector detection in real time, with priority on processing time rather than precision, is desired, and sensorless hand-shake detection methods according to the prior art can provide satisfactory results in most situations.
On the other hand, the prior art for hand-shake correction systems for still images is in many cases little more than a concept, and generally does not assume pixel counts on the order of today's ten million pixels. It therefore does not consider the rotational component of the hand shake, or, even where that rotational component is considered, it requires, for example, an enormous amount of computation. It thus lacks realistic consideration of existing mobile devices such as digital cameras as targets.
However, as described above, the pixel density of image pickup devices such as digital cameras can be expected to keep increasing, and ever higher performance will be required. Under these circumstances, realizing sensorless hand-shake correction that does not use a gyro (angular velocity) sensor when shooting still images is of great significance.
It is therefore desirable, as described above, to calculate the hand-shake motion vector by block matching and to perform sensorless hand-shake correction using the detected motion vector. In addition, it is very important to solve the problems described above.
In view of the above, it is desirable to provide an image processing method and device that can solve the problems of the existing sensorless hand-shake correction systems described above and provide a high-resolution image.
According to an embodiment of the present invention, there is provided an image processing device including: per-block motion vector calculating means for calculating a motion vector between two pictures of an image input sequentially in picture units, performing block matching in each of divided regions obtained by dividing one picture into a plurality of regions, and calculating a per-block motion vector for each of the divided regions; translation amount calculating means for calculating an amount of translation of the other of the two pictures with respect to one of the two pictures from a plurality of the per-block motion vectors calculated by the per-block motion vector calculating means; rotation angle calculating means for calculating a rotation angle of the other of the two pictures with respect to one of the two pictures from the plurality of per-block motion vectors calculated by the per-block motion vector calculating means; and rotation and translation adding means for superimposing a plurality of pictures on each other using the amount of translation calculated by the translation amount calculating means and the rotation angle calculated by the rotation angle calculating means.
In the image processing device according to the foregoing embodiment of the present invention, the amount of translation and the rotation angle of the reference picture with respect to the original picture are calculated from the plurality of per-block motion vectors calculated by the per-block motion vector calculating means. Then, using the calculated amount of translation and the calculated rotation angle, a plurality of pictures are sequentially superimposed on each other. When the image is, for example, a captured image, the image resulting from the superimposition is a high-resolution image from which the hand-shake component has been removed.
The image processing device according to the foregoing embodiment of the present invention further includes: global motion vector calculating means for calculating a global motion vector of the whole of the other of the two pictures with respect to one of the two pictures; and evaluating means for evaluating each of the plurality of per-block motion vectors obtained by the per-block motion vector calculating means, using the global motion vector; wherein, when the number of per-block motion vectors given a high evaluation value by the evaluating means is less than a predetermined threshold, the other of the two pictures is excluded from the pictures superimposed on each other by the rotation and translation adding means.
According to the foregoing embodiment of the present invention, each of the plurality of per-block motion vectors is evaluated using the global motion vector of the whole of the reference picture with respect to the original picture, and a reference picture of low reliability, for which the number of per-block motion vectors given a high evaluation value is less than the predetermined threshold, is excluded from the pictures superimposed on each other by the rotation and translation adding means.
Therefore, only reference pictures of high reliability are superimposed on each other by the rotation and translation adding means. A high-resolution image free of the hand-shake component can thus be expected.
The image processing device according to the foregoing embodiment of the present invention further includes: global motion vector calculating means for calculating a global motion vector of the whole of the other of the two pictures with respect to one of the two pictures; and evaluating means for evaluating each of the plurality of per-block motion vectors obtained by the per-block motion vector calculating means, using the global motion vector; wherein the translation amount calculating means and the rotation angle calculating means calculate the amount of translation and the rotation angle using only the per-block motion vectors given a high evaluation value by the evaluating means.
According to the foregoing embodiment of the present invention, only the per-block motion vectors of high reliability among the per-block motion vectors calculated by the per-block motion vector calculating means are used to calculate the amount of translation and the rotation angle, so that an accurate amount of translation and an accurate rotation angle are calculated.
Therefore, the rotation and translation adding means superimposes the reference pictures on each other using the accurate amount of translation and the accurate rotation angle. A high-resolution image free of the hand-shake component can thus be expected.
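The following sketch illustrates one way such an evaluation could be expressed; the agreement tolerance and the minimum block count are illustrative parameters assumed here, not values given in this document.

```python
def evaluate_block_vectors(block_vectors, global_vector, tol=1.0, min_agreeing=8):
    """Keep only per-block motion vectors that agree with the global motion vector.

    A block vector receives a high evaluation when it lies within `tol` pixels
    of the global vector.  If fewer than `min_agreeing` blocks qualify, the
    reference picture is judged unreliable and should be excluded from the
    superimposition (and from the translation/rotation calculation).
    Returns (reliable_vectors, picture_is_usable).
    """
    gx, gy = global_vector
    reliable = [(vx, vy) for vx, vy in block_vectors
                if abs(vx - gx) <= tol and abs(vy - gy) <= tol]
    return reliable, len(reliable) >= min_agreeing
```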
In the foregoing embodiments of the present invention, the amount of translation and the rotation angle of the reference picture with respect to the original picture are calculated from the plurality of per-block motion vectors of the reference picture, the per-block motion vectors being calculated by the per-block motion vector calculating means. Using the calculated amount of translation and the calculated rotation angle, a plurality of pictures can be sequentially superimposed on each other. When the image is, for example, a captured image, the image resulting from the superimposition is a high-resolution image from which the hand-shake component has been removed.
Description of the Drawings
Fig. 1 is a block diagram showing an example of the structure of a first embodiment of an image processing device according to the present invention;
Fig. 2 is a diagram of assistance in explaining an outline of an embodiment of an image processing method according to the present invention;
Fig. 3 is a diagram of assistance in explaining the outline of the embodiment of the image processing method according to the present invention;
Fig. 4 is a diagram of assistance in explaining the outline of the embodiment of the image processing method according to the present invention;
Fig. 5 is a diagram of assistance in explaining a process of calculating a translational component of hand shake of a frame in the embodiment of the image processing method according to the present invention;
Fig. 6 is a diagram of assistance in explaining the process of calculating the translational component of hand shake of a frame in the embodiment of the image processing method according to the present invention;
Figs. 7A, 7B, 7C, and 7D are diagrams of assistance in explaining a process of calculating a rotational component of hand shake of a frame in the embodiment of the image processing method according to the present invention;
Figs. 8(A), 8(B), 8(C), 8(D), and 8(E) are diagrams of assistance in explaining the process of calculating the rotational component of hand shake of a frame in the embodiment of the image processing method according to the present invention;
Figs. 9A and 9B are diagrams of assistance in explaining the outline of the embodiment of the image processing method according to the present invention;
Fig. 10 is a diagram of assistance in explaining the outline of the embodiment of the image processing method according to the present invention;
Fig. 11 is a flowchart of assistance in explaining the outline of the embodiment of the image processing method according to the present invention;
Figs. 12A and 12B are diagrams of assistance in explaining an example of processing for calculating per-block motion vectors in a plurality of stages in the embodiment of the image processing method according to the present invention;
Fig. 13 is a diagram of assistance in explaining an example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Figs. 14A and 14B are diagrams of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 15 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Figs. 16A and 16B are diagrams of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 17 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 18 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 19 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Figs. 20A and 20B are diagrams of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 21 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 22 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Figs. 23A and 23B are diagrams of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Figs. 24A, 24B, 24C, and 24D are diagrams of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 25 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 26 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Figs. 27A and 27B are diagrams of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 28 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 29 is a diagram of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Figs. 30A, 30B, 30C, and 30D are diagrams of assistance in explaining the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 31 is a diagram of assistance in explaining the processing performance of the example of processing for calculating per-block motion vectors in the embodiment of the image processing method according to the present invention;
Fig. 32 is a diagram of assistance in explaining the outline of the embodiment of the image processing method according to the present invention;
Fig. 33 is a diagram of assistance in explaining features of the embodiment of the image processing method according to the present invention in comparison with an existing method;
Fig. 34 is a diagram of assistance in explaining features of the embodiment of the image processing method according to the present invention in comparison with an existing method;
Fig. 35 is a diagram of assistance in explaining features of the embodiment of the image processing method according to the present invention in comparison with an existing method;
Fig. 36 is a part of a flowchart of assistance in explaining an example of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 37 is a part of the flowchart of assistance in explaining the example of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 38 is a part of the flowchart of assistance in explaining the example of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 39 is a part of the flowchart of assistance in explaining the example of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 40 is a part of a flowchart of assistance in explaining another example of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 41 is a part of a flowchart of assistance in explaining other examples of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 42 is a part of the flowchart of assistance in explaining the other examples of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 43 is a diagram of assistance in explaining the other examples of processing for detecting the translational component and the rotational component of hand shake in the first embodiment of the image processing device according to the present invention;
Fig. 44 is a part of a flowchart of assistance in explaining a first example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 45 is a part of the flowchart of assistance in explaining the first example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 46 is a part of a flowchart of assistance in explaining a second example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 47 is a part of the flowchart of assistance in explaining the second example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 48 is a part of a flowchart of assistance in explaining a third example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 49 is a part of the flowchart of assistance in explaining the third example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 50 is a part of the flowchart of assistance in explaining the third example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 51 is a part of the flowchart of assistance in explaining the third example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 52 is a diagram of assistance in explaining the third example of per-block motion vector detection processing in the first embodiment of the image processing device according to the present invention;
Fig. 53 is a block diagram showing an example of the structure of a rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 54 is a diagram of assistance in explaining the example of the structure of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 55 is a flowchart of assistance in explaining an example of processing of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 56 is a block diagram showing an example of the structure of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 57 is a diagram of assistance in explaining the example of the structure of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 58 is a flowchart of assistance in explaining an example of processing of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 59 is a block diagram showing an example of the structure of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 60 is a diagram of assistance in explaining an example of the structure of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 61 is a diagram of assistance in explaining the example of the structure of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 62 is a part of a flowchart of assistance in explaining an example of processing of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 63 is a part of the flowchart of assistance in explaining the example of processing of the rotation and translation adding unit 19 in the first embodiment of the image processing device according to the present invention;
Fig. 64 is a block diagram showing an example of the structure of a second embodiment of the image processing device according to the present invention;
Fig. 65 is a diagram of assistance in explaining per-block motion vector detection processing in the second embodiment of the image processing device according to the present invention;
Fig. 66 is a diagram of assistance in explaining the per-block motion vector detection processing in the second embodiment of the image processing device according to the present invention;
Fig. 67 is a part of a flowchart of assistance in explaining an example of per-block motion vector detection processing in the second embodiment of the image processing device according to the present invention;
Fig. 68 is a part of the flowchart of assistance in explaining the example of per-block motion vector detection processing in the second embodiment of the image processing device according to the present invention;
Fig. 69 is a block diagram showing an example of the structure of a third embodiment of the image processing device according to the present invention;
Fig. 70 is a diagram of assistance in explaining another example of the image processing method according to the present invention;
Fig. 71 is a diagram of assistance in explaining a process of calculating a motion vector by block matching;
Fig. 72 is a diagram of assistance in explaining the process of calculating a motion vector by block matching;
Fig. 73 is a diagram of assistance in explaining the process of calculating a motion vector by block matching; and
Fig. 74 is a diagram of assistance in explaining the process of calculating a motion vector by block matching.
Embodiments
Preferred embodiments of an image processing method and an image processing device according to the present invention will hereinafter be described with reference to the drawings, taking as an example a case where the embodiments of the image processing method and the image processing device according to the present invention are applied to an image pickup device and an image pickup method.
[Outline of an Embodiment of the Image Processing Method According to the Present Invention]
In the embodiments described below, the present invention is applied to a hand-shake correction system intended primarily for still images.
In these embodiments, an input image frame is set as the reference frame, and the motion vector between the input image frame and an original frame preceding the input image frame (for example, the original frame obtained by delaying the input by one frame) is detected. Then, when hand-shake correction is performed on still images, the present embodiment carries out the hand-shake correction by sequentially superimposing a plurality of shot images (for example, images shot at 3 fps) on each other.
Thus, when performing hand-shake correction on shot still images, the present embodiment sequentially superimposes and adds a plurality of shot images, thereby providing precision close to pixel precision. The present embodiment detects not only the translational components in the horizontal and vertical directions between frames but also the rotational component between frames as the hand-shake motion vector, and superimposes a plurality of frames on each other after translating and rotating the frames.
It should be noted that the embodiments described below are not limited in essence to still images, and can also be applied to moving images. In the case of moving images, because of the real-time requirement, there is an upper limit on the number of frames to be added (the number of frames to be superimposed), as will be described later. However, by applying the method of the present embodiment to each frame, the same device can serve as a system that generates moving images with a high degree of noise reduction.
In addition, in the embodiments described below, in the process of calculating the motion vector between two frames by the block matching described above, a plurality of target blocks are set in the original frame, and block matching is performed for each of the plurality of target blocks.
For example, as shown in Fig. 2, the embodiment described below sets 16 target blocks TGi (i = 0, 1, 2, ..., 15) in the original frame, and sets, in the reference frame 102, 16 projected images 104i (i = 0, 1, 2, ..., 15) corresponding to the 16 target blocks TGi of the original frame. Then, a search range 105i (i = 0, 1, 2, ..., 15) is set corresponding to each projected image, and a SAD table TBLi (i = 0, 1, 2, ..., 15) corresponding to the target block is created in each search range 105i (i = 0, 1, 2, ..., 15).
The present embodiment then detects the motion vector of each target block, that is, a per-block motion vector BLK_Vi, from each created SAD table TBLi.
Then, basically, the translational component and the rotation angle of the reference frame with respect to the original frame are calculated from the plurality of per-block motion vectors BLK_Vi, and the reference frame is added onto the original frame using the calculated translational component and the calculated rotation angle. By sequentially updating the original frame to the next frame in each frame period and repeating the above process so that frames are sequentially superimposed on each other, a high-quality image from which the effect of hand shake has been eliminated can be obtained.
In this case, as shown in Fig. 3, when two or more frames are superimposed on each other, the first frame is in practice set as the reference, and the subsequent frames are superimposed on the first frame. Therefore, the amounts of translation and the rotation angles between each of the second and subsequent frames to be added and the frame immediately preceding it are sequentially added, to obtain the amount of translation and the rotation angle with respect to the first frame, as illustrated by the sketch below.
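The following is a minimal sketch, under stated assumptions, of how this cumulative superimposition could be organized in code. The use of OpenCV's warpAffine, the simple averaging of the aligned frames, and the sign conventions for the translation and rotation are all illustrative assumptions, not details taken from the patent.

```python
import cv2
import numpy as np

def stack_frames(frames, rel_translations, rel_angles_deg):
    """Superimpose frames on the first frame, as outlined with reference to Fig. 3.

    frames: list of grayscale frames (H x W arrays); frames[0] is the reference.
    rel_translations[k], rel_angles_deg[k]: translation (dx, dy) in pixels and
    rotation in degrees of frame k+1 relative to frame k.
    The per-frame amounts are accumulated so that every frame is aligned to
    frame 0, and the aligned frames are then averaged.
    """
    h, w = frames[0].shape
    acc = frames[0].astype(np.float64)
    total_dx = total_dy = total_angle = 0.0

    for k, frame in enumerate(frames[1:]):
        # Accumulate the relative amounts to obtain motion relative to frame 0.
        total_dx += rel_translations[k][0]
        total_dy += rel_translations[k][1]
        total_angle += rel_angles_deg[k]

        # Rotate about the frame centre, then shift back onto frame 0.
        m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -total_angle, 1.0)
        m[0, 2] -= total_dx
        m[1, 2] -= total_dy
        acc += cv2.warpAffine(frame.astype(np.float64), m, (w, h))

    return acc / len(frames)
```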
[calculating first example of the method for the translational movement and the anglec of rotation]
Use piece coupling (in this manual, the piece coupling will be known as detection) to determine that translational movement between primitive frame and the reference frame and a kind of method of the anglec of rotation are to determine the method for the translational movement and the anglec of rotation generally with respect to the overall motion vector of primitive frame according to reference frame.That is, overall motion vector is represented the frame discussed with respect to the moving of previous frame, thereby can be by former state as translational movement.
Particularly, the component on the horizontal direction of overall motion vector (x direction) is the translational movement on the horizontal direction, and the component on the vertical direction of overall motion vector (y direction) is the translational movement on the vertical direction.
With respect to the overall motion vector that obtains at previous frame, the relative anglec of rotation of the overall motion vector that is obtained at the frame of paying close attention to this moment (reference frame) is to pay close attention to the relative anglec of rotation of frame with respect to previous frame this moment.
As the method for calculating overall motion vector in this case, can adopt a kind of like this method, wherein, as under the situation of existing piece coupling, carry out the majority decision based on every block motion vector BLK_Vi that 16 object block are detected, and calculate every block motion vector of most decision maximums (top) (maximum number of the mutually the same or every block motion vector that is equal to of size and Orientation), motion vector as a whole.
Yet, there is following point in the peaked every block motion vector of the most decisions of the calculating this method of motion vector as a whole, wherein, when the moving image of reference object and during the taking moving image to as if when the tree of for example ripply water surface or bending when blowing or grass, can detect wrong overall motion vector (hand is trembled vector).Because present digital camera not only picks up and write down rest image, also picks up and write down moving image, do not realize so do not expect the method for calculating overall motion vector by the majority rule fixed system.
Accordingly, the present embodiment calculates the global motion vector from a total SAD table using total SAD values, which will be described next.
Specifically, when the 16 SAD tables TBLi created for the 16 target blocks as described above are arranged in the vertical direction so as to be superimposed on each other as shown in Fig. 2, the present embodiment sums the SAD values at the reference block positions that correspond to each other within the search range of each SAD table TBLi, thereby obtaining total sums of absolute differences (referred to as total SAD values). Then, a total SAD table is created for the plurality of reference block positions in the search range, as an SAD table composed of the total SAD values.
In this case, letting TBLi(x, y) be the SAD value at the corresponding coordinate (x, y) in each SAD table TBLi, the total SAD value SUM_TBL(x, y) at a coordinate position (x, y) in the total SAD table SUM_TBL is SUM_TBL(x, y) = TBL1(x, y) + TBL2(x, y) + ... + TBL16(x, y) = Σ TBLi(x, y) (see (equation 3) in Fig. 4).
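As a minimal sketch of (equation 3) and of reading a coarse global motion vector from the result (assuming each per-block SAD table is held as a two-dimensional NumPy array centred on the zero offset; the function names are illustrative):

import numpy as np

def total_sad_table(sad_tables):
    """sad_tables: list of 16 arrays of identical shape, one per target block."""
    sum_tbl = np.zeros_like(sad_tables[0], dtype=np.int64)
    for tbl in sad_tables:
        sum_tbl += tbl            # element-wise sum: SUM_TBL(x, y) = sum_i TBLi(x, y)
    return sum_tbl

def coarse_global_vector(sum_tbl):
    # One-pixel-precision global motion vector: position of the minimum total
    # SAD value, measured from the centre of the table (reference vector (0, 0)).
    iy, ix = np.unravel_index(np.argmin(sum_tbl), sum_tbl.shape)
    cy, cx = (sum_tbl.shape[0] - 1) // 2, (sum_tbl.shape[1] - 1) // 2
    return ix - cx, iy - cy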
Then, the present embodiment detects the motion vector of the reference image with respect to the original frame (the global motion vector serving as the hand-shake vector in an image pickup device) from the total SAD table SUM_TBL.
As a method of calculating the global motion vector from the total SAD table SUM_TBL, an existing method can be used that detects the position of the minimum total SAD value among the total SAD values in the total SAD table SUM_TBL and takes the reference vector corresponding to the detected position of the minimum value as the global motion vector.
However, such a method using the minimum total SAD value provides a motion vector only with one-pixel precision. The present embodiment calculates the global motion vector by performing approximate-surface interpolation using the total SAD value at the minimum-value position and a plurality of total SAD values in the vicinity of that minimum-value position. That is, by using the total SAD value at the minimum-value position and a plurality of neighboring total SAD values to generate an approximate higher-order surface and detecting the position of the minimum value of that approximate higher-order surface, the global motion vector can be calculated with a precision finer than one pixel. The process of approximate-surface interpolation will be described later in detail.
Because the total SAD table composed of the total SAD values is equivalent to the result of block matching over the whole frame, the global motion vector obtained from the total SAD table is accurate even for the above-mentioned moving-image subjects that are difficult for the majority-decision system to handle.
Therefore, the translational amount and the rotation angle can be obtained from the global motion vector obtained from the total SAD table, and a plurality of frames can be superimposed on each other with respect to the original frame as described above.
Incidentally, the global motion vector obtained at this time is not limited to the total motion vector obtained from the total SAD table; for example, the majority-decision maximum per-block motion vector obtained by the majority-decision system may be used as the global motion vector. However, obtaining the total motion vector is desirable for the reasons described above.
[Second example of a method of calculating the translational amount and the rotation angle]
As a method of calculating the translational amount and the rotation angle, a method can be adopted that determines the translational amount and the rotation angle of the frame from the plurality of per-block motion vectors calculated for the reference frame, rather than calculating a global motion vector and then calculating the translational amount and the rotation angle from it.
In theory, the translational amount of the frame is obtained as the mean of the horizontal and vertical components of the 16 per-block motion vectors. When the search ranges corresponding to the projected images of the plurality of target blocks are referred to as detection frames, detection frames i (= 0, 1, 2, ..., 15) can be provided in one reference frame as shown in Fig. 5.
Then, letting Vxi be the horizontal component and Vyi the vertical component of the per-block motion vector of detection frame i, so that the per-block motion vector is expressed as (Vxi, Vyi), the translational amount α in the horizontal direction (x direction) and the translational amount β in the vertical direction (y direction) are obtained as the means of the horizontal and vertical components of the 16 per-block motion vectors, as shown in (equation 4) and (equation 5) of Fig. 6.
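The sketch below illustrates (equation 4) and (equation 5) as plain averages of the per-block motion vector components (the function name is an assumption):

def mean_translation(block_vectors):
    """block_vectors: list of (Vxi, Vyi) tuples, one per detection frame."""
    n = len(block_vectors)
    alpha = sum(vx for vx, _ in block_vectors) / n   # horizontal translation
    beta = sum(vy for _, vy in block_vectors) / n    # vertical translation
    return alpha, beta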
The rotation angle γ of the frame can in principle be obtained using the 16 per-block motion vectors, as follows.
First, the detection frame numbers of one reference frame arranged as in Fig. 5 are defined as shown in Fig. 7A. At this time, as shown in Fig. 7A, the size of one detection frame is assumed to be 2a (horizontal direction) × 2b (vertical direction), where
a = (number of horizontal pixels of one reference block) + (horizontal interval, in pixels, to the adjacent reference block)
b = (number of vertical pixels of one reference block) + (vertical interval, in pixels, to the adjacent reference block)
Next, the coordinate system shown in Fig. 7B is taken, with the center Oc of all the detection frames of detection frame numbers 0 to 15 as the origin. Then, as shown in Figs. 7C and 7D, values Pxi and Pyi corresponding to detection frame i are defined. The values Pxi and Pyi represent weights of the horizontal (x-direction) and vertical (y-direction) distances from the center Oc of all the detection frames to the center of each detection frame.
Using the values Pxi and Pyi, the center coordinates of the detection frame of each detection frame number i can be expressed as (Pxi·a, Pyi·b).
Therefore, letting (α, β) be the translational amount of the frame and γ the rotation angle of the frame, the theoretical per-block motion vector of detection frame i can be expressed as (equation 6) shown in Fig. 8A.
Incidentally, the rotation angle γ produced by hand shake, as measured for many subjects in still-image pickup at 3 fps, is at most about
γ [rad] = arctan(1/64) = 0.0156237...
Therefore, cos γ ≈ 1 and sin γ ≈ γ can be assumed. With these approximations, the theoretical per-block motion vector Wi can be expressed as (equation 6).
Abbreviating the actually detected per-block motion vector BLK_Vi of detection frame i as Vi, the error εi² between the theoretical per-block motion vector Wi and the actually detected per-block motion vector Vi is expressed as (equation 7) in Fig. 8B. Partial differentiation of the error with respect to the rotation angle γ is carried out as in (equation 8) of Fig. 8C.
Incidentally, in Fig. 8C, "∂F/∂γ" denotes the partial derivative of the function F(γ) with respect to the rotation angle γ.
If it is assumed that the actually detected per-block motion vectors of the reference frame contain exactly the actual rotation angle γ, then the value obtained by partially differentiating the total error Σεi² over all the per-block motion vectors Vi of the reference frame with respect to the rotation angle γ should be zero. The rotation angle γ is therefore expressed as (equation 9) in Fig. 8D.
Accordingly, the rotation angle γ of the reference frame can be determined by (equation 10) shown in Fig. 8E.
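As a hedged sketch of this least-squares estimate: under the small-angle model above, the theoretical vector of a detection frame whose center lies at (Pxi·a, Pyi·b) from Oc is Wi = (α − Pyi·b·γ, β + Pxi·a·γ), and setting the derivative of the summed squared error to zero gives the closed form coded below. The exact expression of (equation 10) is given in Fig. 8E and is not reproduced in the text, so this standard derivation is offered only as an illustration; the names are assumptions.

def rotation_angle(block_vectors, centers, alpha, beta):
    """block_vectors: list of measured (Vxi, Vyi); centers: list of (xi, yi)
    detection-frame centre coordinates measured from the centre Oc."""
    num = 0.0
    den = 0.0
    for (vx, vy), (xi, yi) in zip(block_vectors, centers):
        num += xi * (vy - beta) - yi * (vx - alpha)  # cross term of the residual
        den += xi * xi + yi * yi                     # squared distance from Oc
    return num / den if den else 0.0                 # gamma in radians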
[Example of a method of calculating the translational amount and the rotation angle with higher precision]
In the case of still images, the precision of the translational amount and the rotation angle obtained simply from the global motion vector or from the plurality of per-block motion vectors is still insufficient.
In consideration of this point, the present embodiment therefore calculates the translational amount and the rotation angle with higher precision, and superimposes the frame images using the high-precision translational amount and the high-precision rotation angle.
As described above, because of moving objects and the like, not all of the plurality of per-block motion vectors obtained for one reference frame are highly reliable from the viewpoint of hand-shake vector detection.
Therefore, the present invention determines the reliability of the plurality of per-block motion vectors obtained for one reference frame as described below, and calculates the translational amount and the rotation angle using only the per-block motion vectors judged to be highly reliable. The precision of the calculated translational amount and the calculated rotation angle is thereby improved.
That is, the present embodiment eliminates as far as possible the motion vector components of moving objects, whose components should not be included in the motion vector of the whole picture produced by hand shake, making it possible to calculate the translational amount and the rotation angle with higher precision.
For this purpose, the present embodiment compares the global motion vector calculated for the reference frame in question, that is, the global motion vector obtained from the total SAD table in this example (hereinafter, this global motion vector will be referred to as the total motion vector SUM_V), with the per-block motion vector BLK_Vi obtained from the SAD table TBLi of each target block (i = 1, 2, ..., 16), and searches for highly reliable per-block motion vectors identical or close to the global motion vector.
When the number of highly reliable per-block motion vectors is less than a predetermined threshold, the present embodiment determines that the frame in question is unsuitable as a frame to be superimposed. The still-image processing of the frame in question is skipped in the present embodiment, and the next frame is processed.
When the number of highly reliable per-block motion vectors is equal to or greater than the predetermined threshold, a high-accuracy per-block motion vector with a precision finer than one pixel, which will be described later, is calculated from the SAD table of each target block corresponding to a per-block motion vector judged to be highly reliable. Then, only the calculated high-accuracy per-block motion vectors are used to perform the above-described calculation of the translational amount and detection of the rotation angle.
In this case, the first example and the second example described above can be used to calculate the translational amount.
For example, when the second example described above is used, the translational amount is calculated using only the high-accuracy per-block motion vectors obtained for the detection frames judged to be highly reliable among the 16 detection frames shown in Fig. 5. However, in the process of calculating the translational amount, the present embodiment removes from the objects of the translational-amount calculation not only the per-block motion vector of a detection frame of low reliability, say detection frame number q, but also the per-block motion vector of the detection frame of number (15 − q), which is located at the position symmetric to detection frame q with respect to the center point Oc of all the detection frames.
This is because the present embodiment takes frame rotation into consideration; when the per-block motion vector of a detection frame located at a position symmetric with respect to the center point Oc is judged to be of low reliability and is removed from the objects of the translational-amount calculation, an error will occur in the result of the translational-amount calculation unless the per-block motion vector of the other detection frame located at the position symmetric to that detection frame with respect to the center point is also removed from the objects of the calculation.
On the other hand, when the rotation angle is calculated, only the per-block motion vectors of the detection frames judged to be of low reliability are removed from the objects of the rotation-angle calculation, and the high-accuracy per-block motion vectors of the detection frames located at positions symmetric to the removed detection frames remain included in the objects of the rotation-angle calculation.
As described above, only highly reliable detection frames are used to calculate the translational amount and the rotation angle of the frame. Therefore, the translational amount and the rotation angle can be expected to be calculated with high precision.
The above-described frame-by-frame reliability determination, that is, the intra-frame determination of the reliability of the plurality of per-block motion vectors, is performed as follows.
First, the SAD tables TBLi corresponding to the plurality of target blocks set in the original frame (in this example, the 16 target blocks TGi, i = 1, 2, ..., 16) are obtained, and each per-block motion vector BLK_Vi is obtained from the coordinate position of the minimum SAD value MINi in the corresponding SAD table (see Fig. 9A). Next, the total SAD table SUM_TBL is obtained from the 16 SAD tables TBLi according to (equation 3) above, and the total motion vector SUM_V is obtained from the coordinate position of the minimum SAD value MINs in the total SAD table SUM_TBL (see Fig. 9B).
Next, in the present embodiment, with the total motion vector SUM_V, that is, the coordinate position of the minimum SAD value MINs in the total SAD table SUM_TBL, as the reference, the condition judgments shown in Fig. 10 are performed for each of the 16 target blocks on the basis of each per-block motion vector BLK_Vi and the SAD values of the 16 target blocks, and the labeling (evaluation labels) and score calculation shown in Fig. 10 are performed.
Fig. 11 is a flow chart showing an example of the process of labeling and score calculation. The processing of Fig. 11 is for one reference frame; the process of Fig. 11 is therefore repeated for each frame.
First, it is determined whether the motion vector BLK_Vi obtained for the subject target block to be labeled and scored satisfies the first condition, namely that it is equal to the total motion vector SUM_V (step S11). This is equivalent to determining whether the coordinate position of the minimum SAD value MINi in the SAD table TBLi of the subject target block is equal to the coordinate position of the minimum SAD value MINs in the total SAD table SUM_TBL.
When it is determined that the subject target block satisfies the first condition, the label "TOP" is given to the subject target block, and the maximum score value "4" is assigned to the subject target block in the present embodiment (step S12).
When it is determined that the subject target block does not satisfy the first condition, it is determined whether the subject target block satisfies the second condition (step S13). The second condition is whether, although the motion vector BLK_Vi obtained for the subject target block and the total motion vector SUM_V differ from each other, they are nearest-neighbor vectors on the SAD table. Specifically, the second condition is whether the coordinate position of the minimum SAD value MINi in the SAD table TBLi of the subject target block and the coordinate position of the minimum SAD value MINs in the total SAD table SUM_TBL are adjacent to each other, differing from each other by one coordinate value in the vertical, horizontal, or diagonal direction.
When it is determined that the subject target block satisfies the second condition, the label "NEXT_TOP" is given to the subject target block, and the score value "2" is assigned to the subject target block in the present embodiment (step S14).
When it is determined that the subject target block does not satisfy the second condition, it is determined whether the subject target block satisfies the third condition (step S15). The third condition is whether, in the SAD table of the subject target block, the difference between the SAD value at the coordinate position indicated by the per-block motion vector BLK_Vi (the minimum SAD value MINi) and the SAD value at the coordinate position on that SAD table corresponding to the coordinate position indicated by the total motion vector SUM_V (the coordinate position of the minimum SAD value MINs) is equal to or less than a predetermined threshold. In this case, the predetermined threshold is desirably converted into a threshold per pixel, because the present embodiment assumes that hand shake is corrected with one-pixel accuracy.
When it is determined that the subject target block satisfies the third condition, the label "NEAR_TOP" is given to the subject target block, and the score value "1" is assigned to the subject target block in the present embodiment (step S16).
When it is determined that the subject target block does not satisfy the third condition, the label "OTHERS" is given to the subject target block, and the score value "0" is assigned to the subject target block in the present embodiment (step S17).
After the labeling and score assignment in step S12, step S14, step S16, or step S17 is completed, the assigned scores are accumulated to calculate a total score sum_score (step S18).
Next, it is determined whether the above process has been completed for all 16 target blocks (step S19). When the above processing has not been completed for all 16 target blocks, an instruction is given to perform the labeling and score calculation for the next target block (step S20). Thereafter, the process returns to step S11 to repeat the above steps.
When it is determined that the above processing has been completed for all 16 target blocks in one frame, the procedure of labeling and score calculation for all 16 target blocks in the frame is finished. At this time, the total score sum_score calculated in step S18 serves as the total score of all 16 target blocks.
Incidentally, the flow chart of Fig. 11 is an example; the determinations of whether the first condition, the second condition, and the third condition are satisfied may be made in an arbitrary order, and any of the determinations may be made first.
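The sketch below condenses the condition judgments and scoring of Figs. 10 and 11 (an illustrative sketch, assuming each SAD table is passed in as a two-dimensional NumPy array indexed as tbl[y, x]; the third-condition threshold is an assumed parameter):

def label_block(tbl, block_min, total_min, sad_threshold):
    bx, by = block_min                             # minimum MINi of this block's SAD table
    tx, ty = total_min                             # minimum MINs of the total SAD table
    if (bx, by) == (tx, ty):                       # first condition
        return "TOP", 4
    if max(abs(bx - tx), abs(by - ty)) == 1:       # second condition: adjacent position
        return "NEXT_TOP", 2
    # third condition: SAD at the total-vector position vs. the block minimum
    if abs(int(tbl[ty, tx]) - int(tbl[by, bx])) <= sad_threshold:
        return "NEAR_TOP", 1
    return "OTHERS", 0

def frame_score(tables, block_mins, total_min, sad_threshold):
    sum_score = 0
    labels = []
    for tbl, bmin in zip(tables, block_mins):      # loop over the 16 target blocks
        label, score = label_block(tbl, bmin, total_min, sad_threshold)
        labels.append(label)
        sum_score += score                         # accumulation of step S18
    return labels, sum_score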
After the above procedure of labeling and score calculation for all 16 target blocks in one frame is finished, the calculated total score sum_score is compared with a reliability threshold. When the total score sum_score is lower than the threshold, it can be determined that the motion vectors obtained in that frame have low reliability for the detection of the global motion vector.
Alternatively, the number of per-block motion vectors of target blocks that satisfy the first condition or the second condition, are thus judged to be highly reliable, and are given the "TOP" or "NEXT_TOP" label is counted. When this number is lower than a predetermined threshold, it can be determined that the motion vectors obtained in that frame have low reliability for the detection of the global motion vector.
When the total score sum_score is equal to or higher than the threshold, or when the number of per-block motion vectors of the target blocks labeled "TOP" and "NEXT_TOP" is equal to or greater than the predetermined threshold, it can be determined that the detection of the global motion vector obtained in that frame has a certain degree of reliability.
Thus, when the total score sum_score is equal to or higher than the threshold, or when the number of per-block motion vectors of the target blocks labeled "TOP" and "NEXT_TOP" is equal to or greater than the predetermined threshold, the total SAD table is regenerated using only the SAD values of the SAD tables of the highly reliable target blocks (those labeled "TOP" and "NEXT_TOP") that satisfy the first condition or the second condition. The total motion vector serving as the global motion vector is recalculated on the basis of the regenerated total SAD table. The translational amount and the rotation angle of the frame can then be calculated from the recalculated total motion vector.
The global motion vector obtained at this time is not limited to the total motion vector obtained from the total SAD table; it can also be obtained, for example, by a majority decision based on the highly reliable per-block motion vectors.
In addition, instead of calculating the translational amount and the rotation angle from the global motion vector, the translational amount (α, β) and the rotation angle γ can be determined, on the basis of (equation 4) to (equation 10) described with reference to Figs. 6 to 8E, using only the per-block motion vectors bearing the "TOP" and "NEXT_TOP" labels that indicate high reliability.
As described above, the present embodiment adopts the method of calculating the translational amount (α, β) and the rotation angle γ using only the per-block motion vectors bearing the "TOP" and "NEXT_TOP" labels that indicate high reliability.
However, in order to obtain still higher reliability, the present embodiment performs the following processing.
The present embodiment gradually narrows the search range of each target block and performs the block matching processing for a whole frame (hereinafter referred to as detection) in multiple stages. In the embodiment described below, the block matching (detection) is performed in two stages.
As shown in Fig. 12A, the search range SR_1 for each target block TGi in the first detection is set to the maximum, and the plurality of per-block motion vectors BLK_Vi described above are obtained. After the first detection is finished and the per-block motion vectors of the plurality of target blocks have been calculated, the plurality of per-block motion vectors are evaluated, and per-block motion vectors with high evaluation values are searched for. Only the per-block motion vectors with high evaluation values are used in (equation 4) and (equation 5) above to obtain the translational amount (α, β) of the first detection. Then, the search range of each target block in the second detection is determined from the translational amount of the first detection.
Alternatively, the global motion vector (hand-shake vector) can be calculated from the blocks with high evaluation values, so that the translational amount of the first detection is calculated from the global motion vector, and the search range of each target block in the second detection is then determined from the translational amount of the first detection.
As shown in Fig. 12A, the per-block motion vector BLK_Vi of each target block is calculated within the search range SR_1 set in the first pass, and when the translational amount is calculated from the plurality of per-block motion vectors or from the global motion vector, the block range having strong correlation between the reference frame and the original frame can be roughly detected from the calculated translational amount.
Therefore, as shown in Fig. 12B, a search range narrower than the search range in the first detection is set as the search range SR_2 of each target block in the second detection processing, centered on the block range having correlation between the reference frame and the original frame. In this case, as shown in Fig. 12B, the offset amount (search-range offset) between the center Poi_1 of the search range SR_1 in the first detection and the center Poi_2 of the search range SR_2 in the second detection corresponds to the translational amount detected in the first detection (corresponding to the global motion vector).
Accordingly, the detection process using the narrowed search range SR_2 for each target block provides, as the result of the second detection, matching results of higher precision than the first-stage detection process.
The present embodiment therefore uses the per-block motion vectors having high evaluation values in the second detection, among the per-block motion vectors obtained, to calculate the translational amount and the rotation angle of the frame as described above. A translational amount and a rotation angle of high precision can thereby be obtained.
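A minimal sketch of the search-range offset between the two stages (assuming the search-range centers are held as pixel coordinates; the function name and the rounding are assumptions):

def second_stage_search_centers(first_stage_centers, first_translation):
    """first_stage_centers: list of (x, y) centres Poi_1 of the wide ranges SR_1;
    first_translation: (alpha, beta) obtained in the first detection.
    Returns the centres Poi_2 of the narrowed ranges SR_2."""
    ax, ay = first_translation
    # Round to whole pixels because the second block matching again works on
    # integer reference-block positions.
    return [(int(round(x + ax)), int(round(y + ay))) for x, y in first_stage_centers]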
The total SAD table used in the present embodiment is substantially equivalent to the result of block matching over the whole frame, rather than an SAD table for each block. For ordinary subjects, the motion vector remaining after the majority decision described in the prior art (that is, the majority-decision maximum motion vector) and the total motion vector obtained from the total SAD table are equal to each other. However, when a plurality of frames are superimposed on each other, if a frame flashes as a whole because another person fires a flash, or if the subject is a rippling water surface or the like, the results of the majority decision are nearly random motion vectors of low reliability. The total motion vector, on the other hand, is by comparison very likely to yield a result close to the correct solution.
Therefore, by comparing the total motion vector obtained from the total SAD table with the global motion vector determined by the majority decision, the reliability of the result of the present frame can be determined, at least quantitatively. Existing motion-vector techniques are mainly directed to the reliability of the motion vector determined for each block. The present embodiment, on the other hand, is characterized in that a stable hand-shake correction system is realized that, on the principle of attaching importance to all frames, provides a more natural image while removing unreliable frames from the frames to be superimposed on each other.
In consideration of this point, one method of the present embodiment performs, as in existing block matching, a majority decision on the basis of the per-block motion vectors BLK_Vi detected for the 16 target blocks, and calculates the majority-decision maximum of the motion vectors (the motion vector for which the number of per-block motion vectors identical or equal in magnitude and direction is largest).
Then, using the majority-decision maximum of the detected motion vectors instead of the total motion vector in Fig. 10 as the reference, labeling and score assignment as shown in Fig. 10 are performed on the basis of the per-block motion vectors detected for the 16 target blocks and the SAD values of each per-block motion vector BLK_Vi.
This is equivalent to using the majority-decision maximum motion vector in place of the total motion vector SUM_V in Fig. 10.
Specifically, the first condition is whether the per-block motion vector BLK_Vi obtained for the target block to be labeled and scored and the majority-decision maximum motion vector are equal to each other. That is, it is determined whether the coordinate position of the minimum SAD value MINi in the SAD table TBLi of the subject target block is equal to the coordinate position of the majority-decision maximum motion vector.
The second condition is whether, even though the motion vector BLK_Vi obtained for the subject target block and the majority-decision maximum motion vector differ from each other, they are nearest-neighbor vectors on the SAD table. Specifically, the second condition is whether the coordinate position of the minimum SAD value MINi in the SAD table TBLi of the subject target block and the coordinate position corresponding to the majority-decision maximum motion vector are adjacent to each other, differing from each other by one coordinate value in the vertical, horizontal, or diagonal direction.
The third condition is whether, in the SAD table of the subject target block, the difference between the SAD value at the coordinate position indicated by the per-block motion vector BLK_Vi (the minimum SAD value MINi) and the SAD value at the coordinate position on that SAD table corresponding to the majority-decision maximum motion vector is equal to or less than a predetermined threshold.
As described above, labeling and score assignment are performed on the motion vectors of the 16 target blocks of one frame with the majority-decision maximum motion vector as the reference. Then, the total score many_score of the assigned scores is calculated.
Then, when the difference between the coordinate position of the minimum SAD value of the total motion vector SUM_V and the coordinate position of the SAD value corresponding to the majority-decision maximum motion vector is within a predetermined value (for example, when the difference is within one neighboring pixel), when the total score sum_score is equal to or higher than a predetermined threshold, and when the total score many_score is equal to or higher than a predetermined threshold, the present embodiment determines that the motion vectors obtained for that frame have high reliability.
On the other hand, when the difference between the coordinate position of the minimum SAD value of the total motion vector SUM_V and the coordinate position of the SAD value corresponding to the majority-decision maximum motion vector is not within the predetermined value (for example, when the difference is not within one neighboring pixel), it is determined that a hand-shake vector of high reliability cannot be obtained from that frame, and the frame is removed from the plurality of frames to be superimposed on each other.
In addition, when the total score sum_score is lower than the predetermined threshold, or when the total score many_score is lower than the predetermined threshold, it is determined that a hand-shake vector of high reliability cannot be obtained from that frame, and the frame is removed from the plurality of frames to be superimposed on each other.
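The decision described above can be summarized by the following sketch (an illustration only; the threshold values and names are assumptions):

def frame_is_reliable(total_vec, majority_vec, sum_score, many_score,
                      score_threshold, many_threshold):
    vx1, vy1 = total_vec                 # vector from the total SAD table
    vx2, vy2 = majority_vec              # majority-decision maximum vector
    vectors_agree = max(abs(vx1 - vx2), abs(vy1 - vy2)) <= 1   # within one pixel
    return (vectors_agree
            and sum_score >= score_threshold
            and many_score >= many_threshold)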
Then, only when the reliability is determined to be high as described above, a total SAD table RSUM_TBL is regenerated in this example using only the SAD values of the SAD tables of the target blocks that were given the "TOP" and "NEXT_TOP" labels in the labeling performed with the total motion vector as the reference.
Then, the global motion vector (total motion vector) is calculated by applying the approximate-surface interpolation to the minimum SAD value in the regenerated total SAD table RSUM_TBL and to the SAD values at the coordinate positions adjacent to the minimum SAD value. The calculated total motion vector is then used to determine the search range for the second detection, or to calculate the translational amount and the rotation angle.
Alternatively, using only the per-block motion vectors of the target blocks given the "TOP" and "NEXT_TOP" labels and using (equation 4) and (equation 5) above, the translational amount is calculated to determine the search range for the second detection, or calculation based on (equation 4) to (equation 10) above is performed to calculate the translational amount and the rotation angle.
Incidentally, the method according to the above-described embodiments of the present invention can be combined with the presently proposed method of predicting the motion vector (global motion vector) from the motion-vector frequency in the time-axis direction, to further improve reliability and precision.
As described above, the present embodiment generates an SAD table and calculates a per-block motion vector for each of the plurality of target blocks in a frame. In this case, when the present embodiment is applied to an image pickup device using an existing image pickup element exceeding five megapixels, it is difficult to realize the present embodiment on a practical circuit scale, because the scale of the SAD tables increases in proportion to the pixel count of one picture.
A feasible practical proposal is that of the above-mentioned patent document 3 (Japanese Patent Laid-Open No. 2005-38396). The device disclosed in patent document 3 includes a device for obtaining motion vectors from an image whose size has been reduced by reduction conversion, and a device for sharing the same SAD table among a plurality of blocks. Reduction conversion of the image and sharing of the same SAD table among a plurality of blocks are very good methods for reducing the SAD table size, and are also used in other fields of motion vector detection, such as the MPEG (Moving Picture Experts Group) image compression system and scene-change detection.
However, the algorithm of patent document 3 has the following problems: time is consumed in memory (DRAM (dynamic random access memory)) access and memory space is consumed for the reduction conversion of the image and for storing the reduced image, and because a plurality of blocks access the SAD table in a time-division manner, memory access increases greatly and this processing also takes time. Hand-shake correction of moving images requires real-time performance and a reduction of the system delay time at the same time, so the processing time becomes a problem.
The results of evaluation by many people indicate, for example, that the hand-shake range for still images at three frames per second (3 fps) is about ±10% of the entire frame. Assuming a high-end device of the pixel count now available on the market, the size of the SAD table required by the presently proposed technique is estimated to be about 80 megabits. In addition, to satisfy a practical processing speed, the memory storing the SAD table information needs to be a built-in SRAM (static random access memory). Although semiconductor process rules have advanced, this size departs from the practical level by about three orders of magnitude.
In addition, the reduction conversion of an image requires a low-pass filter for removing aliasing (folding distortion) and low-illuminance noise as preprocessing for the reduction processing. However, the characteristics of the low-pass filter change according to the reduction factor, and particularly when the low-pass filter is a vertical multi-tap digital filter, many line memories and much arithmetic logic are required, so that the problem of an increase in circuit scale arises.
In view of the above problems, the present embodiment therefore uses an image processing method and device that can greatly reduce the SAD table size when a global motion vector between two frames is calculated using block matching.
In addition, regarding the method of reducing the SAD table by reduction conversion of the image as described in patent document 3, two problems arise: the increase in processing time and memory consumption involved in the reduction conversion of the image, and the increase in circuit scale to realize a suitable low-pass filter for avoiding the aliasing that accompanies the reduction conversion of the image. The present embodiment can solve these problems.
Specifically, instead of storing the SAD value between a target block and a reference block at the reference vector corresponding to that reference block, the present embodiment shrinks the reference vector, and distributes the SAD value to, and adds it to, the values corresponding to the shrunken reference vector and a plurality of reference vectors in the vicinity of the shrunken reference vector, thereby storing the SAD value.
Thus, compared with the existing SAD table, the present embodiment greatly reduces the size of the SAD table, and solves the two problems described above, namely the problem of the increase in processing time and memory consumption involved in the reduction conversion of the image and the problem of the increase in circuit scale to realize a suitable low-pass filter for avoiding the aliasing that accompanies the reduction conversion of the image.
Figs. 13 to 15 are diagrams assisting in explaining the outline of the new block matching used in the present embodiment. Fig. 13 shows the relation between an existing SAD table TBLo and the shrunken SAD table TBLs generated in the image processing method of the present embodiment.
In the present embodiment, as in the existing example shown in Fig. 74, a plurality of search ranges are set in the reference frame, each centered at the position of one of the plurality of target blocks (16 target blocks in this example) set in the original frame. Then, as described above, a plurality of reference blocks are set in each of the plurality of search ranges, and the sum of the absolute values of the differences between the luminance values of the pixels in each reference block and the luminance values of the corresponding pixels in the target block, that is, the SAD value, is obtained.
Conventionally, as shown in Fig. 13, the obtained SAD value is written at the address corresponding to the reference vector RV of the reference block in question in the SAD table TBLo, as a table element tb1.
Therefore, in existing block matching, the reference vector RV representing the positional offset between the target block and the reference block in the frame image and the SAD value of the reference block as each table element of the SAD table TBLo correspond one to one with each other. That is, the existing SAD table TBLo has a number of SAD-value table elements equal to the number of reference vectors RV that can be taken in the search range.
On the other hand, as shown in Fig. 13, Fig. 14A, and Fig. 14B, in the block matching according to the present invention, the reference vector RV of the reference block in question is reduced to a shrunken reference vector CV at a reduction factor of 1/n (n being a natural number).
In the following description, for convenience, the horizontal reduction factor and the vertical reduction factor are the same. However, the horizontal reduction factor and the vertical reduction factor may be different values independent of each other. Moreover, as will be described later, setting the horizontal reduction factor and the vertical reduction factor independently of each other to one over an arbitrary natural number is more flexible and convenient.
In addition, in the present embodiment, as in the existing example described above, the position of the target block taken as the center of the search range is set as the reference position (0, 0), a reference vector represents the horizontal and vertical offset (vx, vy) (vx and vy being integers) in pixel units from the reference position, and each reference vector RV is expressed as the reference vector (vx, vy).
The shrunken reference vector obtained by reducing the reference vector (vx, vy) by 1/n in each of the horizontal and vertical directions indicates the position (vx/n, vy/n); vx/n and vy/n are not necessarily integers and may contain fractional parts. Therefore, in the present embodiment, if the SAD value obtained for the original reference vector RV before the reduction were simply stored in the table element corresponding to the reference vector nearest to the shrunken reference vector CV, an error would arise.
Therefore, in the present embodiment, a plurality of positions (table elements) represented by a plurality of reference vectors in the vicinity of the position (vx/n, vy/n) represented by the shrunken reference vector CV are first detected. Then, the SAD value obtained for the reference block of the corresponding reference vector RV is distributed to, and added to, the SAD values corresponding to the plurality of adjacent reference vectors representing the detected positions.
In this case, in the present embodiment, as the values to be distributed and added to the table elements tb1 corresponding to the positions represented by the plurality of adjacent reference vectors around the position represented by the shrunken reference vector CV, the SAD values to be distributed and added for the respective adjacent reference vectors are calculated from the SAD value obtained for the original reference vector RV before the reduction, using the positional relation between the position represented by the shrunken reference vector and the positions represented by the respective adjacent reference vectors. Each calculated SAD value is then added to the table element component of the corresponding adjacent reference vector.
In this case, the SAD values are not merely assigned but are distributed and added because a plurality of different shrunken reference vectors repeatedly detect overlapping adjacent reference vectors, so that a plurality of SAD values are added together for one reference vector.
Incidentally, when the position (vx/n, vy/n) represented by the shrunken reference vector CV coincides with a position represented by a reference vector, that is, when the values of vx/n and vy/n are integers, it is not necessary to detect a plurality of adjacent reference vectors, and the SAD value obtained for the original reference vector RV before the reduction is stored for the reference vector representing the position (vx/n, vy/n).
Next, the above processing will be described with a concrete example. For example, when the position of the target block is set as the reference (0, 0), the shrunken reference vector CV obtained by reducing, by 1/n = 1/4 in the horizontal and vertical directions, the reference vector RV representing the position (3, −5) shown in Fig. 14A represents the position (0.75, −1.25) shown in Fig. 14B.
Therefore, the position represented by the shrunken reference vector CV contains fractional parts and does not coincide with a position represented by a reference vector.
Therefore, in this case, as shown in Fig. 15, a plurality of adjacent reference vectors representing positions adjacent to the position represented by the shrunken reference vector CV are detected. In the example of Fig. 15, four adjacent reference vectors NV1, NV2, NV3, and NV4 are detected for one shrunken reference vector CV.
Then, as described above, in the present embodiment, the SAD value obtained for the reference vector RV of the reference block in question is distributed and added as the SAD values corresponding to the four adjacent reference vectors NV1, NV2, NV3, and NV4.
In this case, in the present embodiment, using the positional relation between the position P0 (the cross mark in Fig. 15) represented by the shrunken reference vector CV and the positions P1, P2, P3, and P4 (the circle marks in Fig. 15) represented respectively by the four adjacent reference vectors NV1, NV2, NV3, and NV4, the SAD values to be distributed and added to the four adjacent reference vectors NV1, NV2, NV3, and NV4 are calculated as a linear weighted distribution.
In the example of Fig. 15, the position P0 represented by the shrunken reference vector CV divides, at 1:3 in the horizontal direction and at 3:1 in the vertical direction around the position P0, the line segments defined by the positions P1, P2, P3, and P4 represented respectively by the four adjacent reference vectors NV1, NV2, NV3, and NV4.
Therefore, letting Sα be the SAD value obtained for the reference vector RV before the reduction, the values SADp1, SADp2, SADp3, and SADp4 to be distributed and added to the SAD table elements corresponding to the positions P1, P2, P3, and P4 represented respectively by the four adjacent reference vectors NV1, NV2, NV3, and NV4 around the position P0 are:
SADp1 = Sα × 9/16
SADp2 = Sα × 3/16
SADp3 = Sα × 3/16
SADp4 = Sα × 1/16
Then, in the present embodiment, the obtained values SADp1, SADp2, SADp3, and SADp4 are added to the SAD table elements corresponding to the positions P1, P2, P3, and P4 represented respectively by the four adjacent reference vectors NV1, NV2, NV3, and NV4.
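The sketch below performs this "assignment addition" with general bilinear weights (a hedged illustration: the table addressing, the variable names, and the exact weighting rule are assumptions, chosen so that the example above — reference vector (3, −5) with n = 4 — yields the weights 9/16, 3/16, 3/16, and 1/16):

import numpy as np

def assign_add(tbl, vx, vy, sad, n, xmin, ymin):
    """tbl: shrunken SAD table as a float array initialised to zero, addressed
    as tbl[y - ymin, x - xmin] over the shrunken offset range."""
    cx, cy = vx / n, vy / n                 # shrunken reference vector CV
    x0, y0 = int(np.floor(cx)), int(np.floor(cy))
    fx, fy = cx - x0, cy - y0               # fractional parts of CV
    weights = {
        (x0,     y0    ): (1 - fx) * (1 - fy),
        (x0 + 1, y0    ): fx * (1 - fy),
        (x0,     y0 + 1): (1 - fx) * fy,
        (x0 + 1, y0 + 1): fx * fy,
    }
    for (x, y), w in weights.items():
        if w > 0.0:
            tbl[y - ymin, x - xmin] += sad * w   # distribute and add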
In the present embodiment, the above processing is performed for all the reference blocks in the search range.
Therefore, in the present embodiment, when the reference vectors RV are reduced to 1/n, it suffices to prepare a shrunken SAD table TBLs obtained by reducing the existing-size SAD table TBLo, whose elements correspond one to one with all the reference vectors, to 1/n in the horizontal direction and to 1/n in the vertical direction, and to define the SAD values corresponding to the reference vectors in the vicinity of the shrunken reference vectors as the table elements of the shrunken SAD table TBLs (see Fig. 13).
Therefore, the number of table elements of the shrunken SAD table TBLs in the present embodiment is 1/n² of the number of table elements of the existing SAD table TBLo, so that the table size can be reduced greatly.
Incidentally, in the above description of the present embodiment, four adjacent reference vectors of the shrunken reference vector CV are detected, and the SAD value calculated for the reference block in question (reference vector RV) is added, as linearly weighted distribution values, to the SAD table elements corresponding to the four adjacent reference vectors. However, the method of selecting the plurality of reference vectors in the vicinity of the shrunken reference vector CV, and the method of distribution and addition to the SAD table elements corresponding to the adjacent reference vectors, are not limited to the above example.
For example, by detecting 9 or 16 reference vectors in the vicinity of the shrunken reference vector CV and performing the distribution and addition to the SAD table elements corresponding to the 9 or 16 adjacent reference vectors on the basis of so-called cubic interpolation, higher precision can be obtained. When importance is attached to real-time performance and to a reduction of the arithmetic circuitry, the above-described linear weighted distribution and addition to the table elements corresponding to four adjacent reference vectors is more effective.
In the present embodiment, as in the existing method, the reference block is moved to all positions in the search range, and the SAD values of all the reference blocks are assigned to the SAD table (the shrunken SAD table in the present embodiment).
Conventionally, however, the reference vectors correspond one to one with the addresses of the SAD table elements, so that it suffices simply to assign the SAD values to the table. In the method according to the present embodiment, the SAD value calculated for a reference block is distributed and added, and in the shrunken SAD table the reference vectors (shrunken reference vectors) and the table addresses therefore do not correspond one to one with each other. The method according to the present embodiment therefore requires so-called assignment addition, that is, assignment performed by addition, rather than simple assignment of the SAD values to table addresses. In addition, for this purpose, each table element of the SAD table (shrunken SAD table) first needs to be initialized (cleared to zero).
In existing block matching, the detection of the motion vector is completed when the table element having the minimum SAD value is searched for in the SAD table completed as described above and the address of the table element having the minimum value is converted into a reference vector.
On the other hand, the SAD table in the method according to the present embodiment is a shrunken SAD table corresponding to the shrunken reference vectors obtained by shrinking the reference vectors; the minimum value in the shrunken SAD table therefore does not directly correspond to an accurate motion vector.
Of course, a device that calculates the motion vector by converting the table address of the element having the minimum value in the shrunken SAD table into a reference vector and multiplying that reference vector by the reciprocal of the reduction factor 1/n (that is, multiplying by n) is possible if a certain degree of error is allowed.
However, when a more accurate motion vector is to be calculated, interpolation processing is performed on the table elements of the shrunken SAD table as described below, so that an accurate motion vector (per-block motion vector) is detected with the precision of the original vectors.
Incidentally, in the above description, using block matching that employs ordinary reference vectors rather than shrunken reference vectors, SAD tables are obtained for a plurality of target blocks, the total SAD table is obtained by summing the SAD values at corresponding coordinate positions in the plurality of obtained SAD tables, and the global motion vector is calculated by applying approximate-surface interpolation to the total SAD table. The interpolation processing described below can equally be used as the approximate-surface interpolation in that case.
[First example of interpolation processing for calculating a more accurate motion vector]
A first example of interpolation processing for calculating a more accurate motion vector is a method of approximating a plurality of SAD table element values (SAD values) of the shrunken SAD table by a quadratic surface.
Specifically, in the shrunken SAD table, the table element having the minimum SAD value (the integer-precision minimum-value table element (integer-precision table address)) and a plurality of integer-precision table elements around the integer-precision minimum-value table element are obtained. Using the SAD values of these table elements, a quadratic surface of SAD values is determined by the method of least squares. The minimum SAD value of the quadratic surface is detected. The position corresponding to the detected minimum SAD value (the position moved from the reference position of the reference frame) is detected. The detected position is set as the fractional-precision minimum-value table address (corresponding to the vector representing the minimum SAD value in the shrunken SAD table, that is, the minimum-value vector).
In this case, as shown in Fig. 16A or 16B, in order to define a unique quadratic surface, at least one integer-precision minimum-value table element tm and four integer-precision table elements t1, t2, t3, and t4 adjacent to the table element tm are required, the four integer-precision table elements t1, t2, t3, and t4 being located such that the table element tm is sandwiched on both sides by integer-precision table elements.
Then, as shown in Fig. 17, within the range of shrunken reference vectors of the shrunken SAD table corresponding to the search range of the reference frame, with the position (0, 0) of the target frame taken as the reference position, the axis vx/n and the axis vy/n are taken for the offsets in the horizontal and vertical directions (corresponding to the shrunken reference vectors), the SAD-value axis is taken perpendicular to the axis vx/n and the axis vy/n, and a coordinate space formed by these three axes is assumed.
Then, for example, in the coordinate space of Fig. 17, one quadratic curve is generated from the SAD value of the integer-precision minimum-value table element tm and the SAD values of the two table elements t1 and t3 on either side of the integer-precision minimum-value table element tm. In the coordinate space of Fig. 17, another quadratic curve is generated from the SAD value of the integer-precision minimum-value table element tm and the SAD values of the other two table elements t2 and t4 on either side of the minimum-value table element tm. Then, a quadratic surface 201 containing the two quadratic curves is obtained by the method of least squares. The quadratic surface 201 is generated in the coordinate space shown in Fig. 17.
Then, the minimum value 202 of the generated quadratic surface 201 of SAD values is detected. The position (vx/n, vy/n) corresponding to the minimum SAD value (the position 203 in Fig. 17) is detected. The detected position (vx/n, vy/n) is determined as a fractional-precision table element (table address). Then, as shown in Fig. 18, the vector (minimum-value vector 204) corresponding to the detected fractional-precision table address is multiplied by n, whereby a motion vector 205 having the original magnitude precision is obtained.
For example, as shown in Fig. 18, in a case where the minimum-value vector 204 obtained from the fractional-precision minimum-value address in the shrunken SAD table TBLs, with the reference vectors reduced to 1/4, is (0.777, −1.492), the motion vector 205 obtained by multiplying (0.777, −1.492) by 4 is (3.108, −5.968). This motion vector 205 is the motion vector reproduced at the scale of the original image.
Although the above description deals with the case of using the integer-precision minimum-value table element tm and the four table elements adjacent to the integer-precision minimum-value table element tm, it is preferable to use more nearby table elements in order to obtain the quadratic surface of SAD values by the method of least squares. In general, therefore, the table elements within a rectangular region of m (horizontal direction) × m (vertical direction) table elements (m being an integer of 3 or more) around the integer-precision minimum-value table element tm are used.
However, a larger number of neighboring table elements is not always better. Using table elements in a wide region increases the amount of calculation, and increases the possibility of hitting a false local minimum value that depends on the picture pattern. Therefore, the table elements within a rectangular region of an appropriate number of neighboring table elements are used.
The present embodiment is described below by way of two examples of rectangular regions of an appropriate number of neighboring table elements: an example using the table elements within a rectangular region of 3 (horizontal direction) × 3 (vertical direction) table elements around the integer-precision minimum-value table element tm, and an example using the table elements within a rectangular region of 4 (horizontal direction) × 4 (vertical direction) table elements around the integer-precision minimum-value table element tm.
[Example using the table elements within a rectangular region of 3 × 3 table elements]
Figs. 20A and 20B show an example using the table elements (the shaded portion in Fig. 20A) within a rectangular region of 3 (horizontal direction) × 3 (vertical direction) table elements around the integer-precision minimum-value table element tm.
In the example of Figs. 20A and 20B, using the SAD values of the integer-precision minimum-value table element tm shown in Fig. 20A and the 8 table elements adjacent to the integer-precision minimum-value table element tm, the quadratic surface 201 shown in Fig. 20B is generated by the method of least squares. Then, the minimum value 202 of the generated quadratic surface 201 of SAD values is detected. The position (vx/n, vy/n) corresponding to the minimum SAD value (the position 203 in Fig. 20B) is detected. The detected position 203 is determined as the position of a fractional-precision minimum-value table element (fractional-precision minimum-value table address).
Then, as shown in Fig. 18, the vector (minimum-value vector) 204 corresponding to the detected position 203 of the fractional-precision table element is multiplied by n, whereby the motion vector 205 having the original magnitude precision is obtained.
The method of calculating the position 203 corresponding to the minimum value 202 of the quadratic surface 201 of SAD values is as follows. As shown in Fig. 21, the coordinates (x, y) of the integer-precision minimum-value table element tm are taken as the origin (0, 0). In this case, the positions of the 8 surrounding table elements are represented by the combinations of the three positions in the x-axis direction (that is, x = −1, x = 0, and x = 1) and the three positions in the y-axis direction (that is, y = −1, y = 0, and y = 1); these 8 positions are therefore (−1, −1), (0, −1), (1, −1), (−1, 0), (1, 0), (−1, 1), (0, 1), and (1, 1).
Let Sxy denote the SAD value of each table element in the table of Fig. 21. Thus, for example, the SAD value of the integer-precision minimum-value table element tm (position (0, 0)) is expressed as S00, and the SAD value of the table element at the lower-right position (1, 1) is expressed as S11.
Then, the fractional-precision position (dx, dy) on the (x, y) coordinates with the position of the integer-precision minimum-value table element tm as the origin (0, 0) can be obtained by (equation A) and (equation B) shown in Fig. 22.
In (equation A) and (equation B) of Fig. 22,
Kx = −1 when x = −1,
Kx = 0 when x = 0, and
Kx = 1 when x = 1.
In addition,
Ky = −1 when y = −1,
Ky = 0 when y = 0, and
Ky = 1 when y = 1.
Thus, the fractional-precision position (dx, dy) is obtained with the position of the integer-precision minimum-value table element tm as the origin (0, 0). The position 203 with respect to the center of the search range can therefore be detected from the fractional-precision position (dx, dy) and the position of the integer-precision minimum-value table element tm.
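The exact closed forms of (equation A) and (equation B) are given in Fig. 22 and are not reproduced in the text; the sketch below therefore fits the quadratic surface z = c0 + c1·x + c2·y + c3·x² + c4·x·y + c5·y² to the nine SAD values by least squares and takes the minimum of that surface, which realizes the same idea under those stated assumptions.

import numpy as np

def fractional_minimum_3x3(sad3x3):
    """sad3x3: 3 x 3 array of SAD values, centre = integer-precision minimum tm.
    Returns (dx, dy), the fractional-precision offset of the minimum from tm."""
    xs, ys, zs = [], [], []
    for y in (-1, 0, 1):
        for x in (-1, 0, 1):
            xs.append(x); ys.append(y); zs.append(sad3x3[y + 1, x + 1])
    A = np.column_stack([np.ones(9), xs, ys,
                         np.square(xs), np.multiply(xs, ys), np.square(ys)])
    c = np.linalg.lstsq(A, np.asarray(zs, dtype=float), rcond=None)[0]
    _, c1, c2, c3, c4, c5 = c
    # Minimum of the fitted surface: gradient = 0 gives a 2 x 2 linear system.
    dx, dy = np.linalg.solve([[2 * c3, c4], [c4, 2 * c5]], [-c1, -c2])
    return float(dx), float(dy)

def full_precision_vector(tm_offset, dxdy, n):
    # Multiply the fractional-precision minimum-value vector by n to restore
    # the motion vector at the original image scale.
    return (tm_offset[0] + dxdy[0]) * n, (tm_offset[1] + dxdy[1]) * n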
[Example of using the table elements in a rectangular area of 4 x 4 table elements]
Figs. 23A and 23B show an example of using the table elements (the shaded part in Fig. 23A) in a rectangular area of 4 (horizontal) x 4 (vertical) table elements that contains the integer-precision minimum value table element tm near its center.
As in the case of the integer-precision minimum value table element tm and the eight table elements adjacent to it (3 x 3), or the case of the integer-precision minimum value table element tm and the 24 table elements adjacent to it (5 x 5), when the value of m is odd, the integer-precision minimum value table element tm is always located at the center of the plurality of table elements in the rectangular area used, so the table range to be used is determined simply.
On the other hand, as in the case of the integer-precision minimum value table element tm and the 15 table elements adjacent to it (4 x 4), when the value of m is even, the integer-precision minimum value table element tm is not located at the center of the plurality of table elements in the rectangular area used, so some contrivance is needed.
Specifically, the SAD values of the left and right neighboring table elements in the horizontal direction as viewed from the integer-precision minimum value table element tm are compared with each other, and the column adjacent, on the side of the neighboring table element having the lower value, to that lower-value table element is used as the fourth column. Similarly, the SAD values of the upper and lower neighboring table elements in the vertical direction as viewed from the integer-precision minimum value table element tm are compared with each other, and the row adjacent, on the side of the neighboring table element having the lower value, to that lower-value table element is used as the fourth row.
In the example of Fig. 23A, the SAD values of the left and right neighboring table elements in the horizontal direction of the integer-precision minimum value table element tm are "177" and "173". Therefore, the column adjacent to the right side of the right neighboring table element having the lower SAD value "173" is used as the fourth column. The SAD values of the upper and lower neighboring table elements in the vertical direction of the integer-precision minimum value table element tm are "168" and "182". Therefore, the row adjacent above the upper neighboring table element having the lower SAD value "168" is used as the fourth row.
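The following Python sketch is a minimal illustration of the fourth-column/fourth-row selection rule just described; the array layout sad[y][x] and the function name are assumptions made only for illustration.

# Sketch of the fourth-row/fourth-column selection rule; sad is a 2-D array of SAD table
# values indexed as sad[y][x], and (tx, ty) is the integer-precision minimum position tm.
def select_4x4_window(sad, tx, ty):
    # Compare the left and right horizontal neighbours of tm; extend toward the smaller one.
    x0 = tx - 2 if sad[ty][tx - 1] < sad[ty][tx + 1] else tx - 1
    # Compare the upper and lower vertical neighbours of tm; extend toward the smaller one.
    y0 = ty - 2 if sad[ty - 1][tx] < sad[ty + 1][tx] else ty - 1
    # Returns the top-left corner of the 4 x 4 rectangular area containing tm.
    return x0, y0

# For the example of Fig. 23A (right neighbour 173 < left 177, upper 168 < lower 182),
# the window extends one column to the right and one row upward of the 3 x 3 block.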
In the example of Figs. 23A and 23B, a quadric surface 201 is generated by the method of least squares using the SAD values of the integer-precision minimum value table element tm and the 15 table elements adjacent to the integer-precision minimum value table element tm. Then, the minimum value 202 of the generated quadric surface 201 of SAD values is detected, and the position (vx/n, vy/n) corresponding to the minimum SAD value (position 203 in Fig. 23B) is detected. The detected position 203 is taken as the position of the fractional-precision minimum value table element (the fractional-precision minimum value table address).
Then, as shown in Fig. 18 described above, the vector (minimum value vector) 204 corresponding to the detected position 203 of the fractional-precision table element is multiplied by n, whereby the motion vector 205 with the original magnitude precision is obtained.
In this example, the method of calculating the position 203 corresponding to the minimum value 202 of the quadric surface 201 of SAD values is as follows. As shown in Figs. 24A, 24B, 24C, and 24D, the (x, y) coordinates of the integer-precision minimum value table element tm are taken as the origin (0, 0).
In this example, the four table element arrangements shown in Figs. 24A, 24B, 24C, and 24D need to be considered according to the position of the integer-precision minimum value table element tm within the rectangular area of 16 table elements.
In this case, as is understood from Figs. 24A, 24B, 24C, and 24D, the positions of the 15 surrounding table elements are represented by 15 combinations of four positions in the x-axis direction (that is, x = -1, x = 0, x = 1, and x = 2 or x = -2) and four positions in the y-axis direction (that is, y = -1, y = 0, y = 1, and y = 2 or y = -2).
Suppose that the SAD value of each table element in Figs. 24A, 24B, 24C, and 24D is Sxy. Then, for example, the SAD value of the integer-precision minimum value table element tm (position (0, 0)) is expressed as S00, and the SAD value of the table element at position (1, 1) is expressed as S11.
Then, the fractional-precision position (dx, dy) on the (x, y) coordinates having the position of the integer-precision minimum value table element tm as the origin (0, 0) can be obtained by (Equation C) and (Equation D) shown in Fig. 25 from the rectangular area consisting of the integer-precision minimum value table element tm and the 15 surrounding table elements.
As shown in Fig. 26, when (Kx, Ky) coordinates are considered that have, as their origin (0, 0), the center of the rectangular area consisting of the integer-precision minimum value table element tm and the 15 table elements around it, Kx and Ky in (Equation C) and (Equation D) of Fig. 25 take values corresponding to the four table element arrangements shown in Figs. 24A, 24B, 24C, and 24D.
Specifically, in the case corresponding to Fig. 24A:
when x = -2, Kx = -1.5; when x = -1, Kx = -0.5; when x = 0, Kx = 0.5; when x = 1, Kx = 1.5;
and when y = -2, Ky = -1.5; when y = -1, Ky = -0.5; when y = 0, Ky = 0.5; when y = 1, Ky = 1.5.
In the case corresponding to Fig. 24B:
when x = -2, Kx = -1.5; when x = -1, Kx = -0.5; when x = 0, Kx = 0.5; when x = 1, Kx = 1.5;
and when y = -1, Ky = -1.5; when y = 0, Ky = -0.5; when y = 1, Ky = 0.5; when y = 2, Ky = 1.5.
In the case corresponding to Fig. 24C:
when x = -1, Kx = -1.5; when x = 0, Kx = -0.5; when x = 1, Kx = 0.5; when x = 2, Kx = 1.5;
and when y = -2, Ky = -1.5; when y = -1, Ky = -0.5; when y = 0, Ky = 0.5; when y = 1, Ky = 1.5.
In the case corresponding to Fig. 24D:
when x = -1, Kx = -1.5; when x = 0, Kx = -0.5; when x = 1, Kx = 0.5; when x = 2, Kx = 1.5;
and when y = -1, Ky = -1.5; when y = 0, Ky = -0.5; when y = 1, Ky = 0.5; when y = 2, Ky = 1.5.
In addition, Δx and Δy in (Equation C) and (Equation D) shown in Fig. 25 represent the offset of the (Kx, Ky) coordinates with respect to the (x, y) coordinates in each of the table element arrangements of Figs. 24A, 24B, 24C, and 24D. As is understood from Fig. 26:
in the case corresponding to Fig. 24A, Δx = -0.5 and Δy = -0.5;
in the case corresponding to Fig. 24B, Δx = -0.5 and Δy = 0.5;
in the case corresponding to Fig. 24C, Δx = 0.5 and Δy = -0.5; and
in the case corresponding to Fig. 24D, Δx = 0.5 and Δy = 0.5.
The fractional-precision position (dx, dy) is thus obtained with the position of the integer-precision minimum value table element tm as the origin (0, 0). Accordingly, the position 203 with respect to the center of the search range can be detected from the fractional-precision position (dx, dy) and the position of the integer-precision minimum value table element tm.
[Second example of interpolation processing for calculating a more accurate motion vector]
The second example of interpolation processing for calculating a more accurate motion vector generates a cubic curve in the horizontal direction using the SAD values of a plurality of table elements in the horizontal direction of the reduced SAD table that include the integer-precision minimum value table element, generates a cubic curve in the vertical direction using the SAD values of a plurality of table elements in the vertical direction that include the integer-precision minimum value table element, detects the minimum value position (vx, vy) of each cubic curve, and sets the detected position as the fractional-precision minimum value address.
Figs. 27A and 27B are diagrams of assistance in explaining the second example. As in the first example described above, the integer-precision minimum value table element tm and a plurality of integer-precision table elements centered on the integer-precision minimum value table element are obtained, for example the 4 x 4 = 16 table elements in the example of Figs. 27A and 27B (see the shaded part in Fig. 27A).
Then, as shown in Fig. 27B, as in the first example, within the range corresponding to the reduced reference vectors of the reduced SAD table in the search range of the reference frame, with the position of the target frame as the reference position (0, 0), the axes vx/n and vy/n of the offsets in the horizontal direction and the vertical direction (corresponding to the reduced reference vectors) are considered, the SAD value axis is taken as an axis perpendicular to the vx/n axis and the vy/n axis, and a coordinate space is assumed to be formed by these three axes.
Next, using the SAD values of the four table elements in the horizontal direction that include the integer-precision minimum value table element tm among the 16 table elements (the integer-precision minimum value table element tm and the table elements around it), a cubic curve 206 in the horizontal direction is generated in the coordinate space. The horizontal position of the fractional-precision minimum value table element position is detected as the horizontal position vx/n corresponding to the minimum value of the cubic curve 206 in the horizontal direction.
Next, using the SAD values of the four table elements in the vertical direction that include the integer-precision minimum value table element tm among the 16 table elements (the integer-precision minimum value table element tm and the table elements around it), a cubic curve 207 in the vertical direction is generated in the coordinate space. The vertical position of the fractional-precision minimum value table element position is detected as the vertical position vy/n corresponding to the minimum value of the cubic curve 207 in the vertical direction.
The fractional-precision minimum value table element position (fractional-precision minimum value table address) 208 is detected from the horizontal position and the vertical position of the fractional-precision minimum value table element position obtained by the above processing. Then, as shown in Fig. 18 described above, the vector (minimum value vector) 209 corresponding to the detected fractional-precision table element position 208 is multiplied by n, whereby the motion vector with the original magnitude precision is obtained.
That is, the second example is a method that determines the four table elements in each of the horizontal direction and the vertical direction by the method described in the first example, and uniquely determines the cubic curve in each of the horizontal direction and the vertical direction shown in Fig. 27B.
In this case, the method of calculating the position 208 corresponding to the minimum value 202 of the cubic curves 206 and 207 of SAD values is as follows. Let the SAD values at the four points in the horizontal or vertical direction in the vicinity of the minimum value of the cubic curve in that direction be S0, S1, S2, and S3 in order along the horizontal or vertical direction. The equation used to calculate the fractional component u for obtaining the minimum value differs according to which of the three intervals Ra, Rb, and Rc shown in Fig. 28 contains the fractional-precision minimum value.
In this case, the interval Ra is the interval between the position of the SAD value S0 and the position of the SAD value S1. The interval Rb is the interval between the position of the SAD value S1 and the position of the SAD value S2. The interval Rc is the interval between the position of the SAD value S2 and the position of the SAD value S3.
When the fractional-precision minimum value occurs in the interval Ra shown in Fig. 28, the fractional component u, which is the offset from the integer-precision minimum value to the minimum value, is calculated by (Equation E) in Fig. 29.
Similarly, when the fractional-precision minimum value occurs in the interval Rb shown in Fig. 28, the fractional component u, which is the offset from the integer-precision minimum value to the minimum value, is calculated by (Equation F) in Fig. 29.
Further, when the fractional-precision minimum value occurs in the interval Rc shown in Fig. 28, the fractional component u, which is the offset from the integer-precision minimum value to the minimum value, is calculated by (Equation G) in Fig. 29.
Which of the three intervals Ra, Rb, and Rc shown in Fig. 28 contains the fractional-precision minimum value is determined as follows.
Figs. 30A, 30B, 30C, and 30D are diagrams of assistance in explaining this determination. First, as shown in Figs. 30A, 30B, and 30C, the minimum value Smin of the integer-precision SAD values and the second-smallest integer-precision SAD value Sn2 are detected, and the fractional-precision minimum value is taken to occur between the position of the detected minimum value Smin of the integer-precision SAD values and the position of the second-smallest integer-precision SAD value Sn2. Next, according to which of the positions of the SAD values S0, S1, S2, and S3 shown in Fig. 28 are occupied by the integer-precision minimum SAD value Smin and the second-smallest integer-precision SAD value Sn2, the detection interval is determined from among the intervals Ra, Rb, and Rc.
Incidentally, when the integer-precision minimum SAD value Smin is located at an end of the four table element values as shown in Fig. 30D, the position of the minimum cannot be evaluated reliably. In the present embodiment, this case is regarded as an error, and the calculation of the minimum value position is not performed. Of course, the minimum value position can still be calculated even in the case of Fig. 30D.
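The interval determination just described can be illustrated by the following Python sketch. It assumes the four SAD values are given as a list [S0, S1, S2, S3]; (Equation E), (Equation F), and (Equation G) appear only in Fig. 29, so the sketch stops at identifying the interval and treats the end-position case of Fig. 30D as an error, as in the present embodiment.

# Sketch of the interval determination; the placement of Sn2 next to Smin is assumed.
def locate_interval(s):
    i_min = min(range(4), key=lambda i: s[i])          # position of Smin among S0..S3
    if i_min in (0, 3):
        return None          # Smin at an end of the four values: treated as an error here
    # the fractional-precision minimum is taken to lie between Smin and the smaller of
    # its two neighbours (standing in for the second-smallest value Sn2)
    i_n2 = i_min - 1 if s[i_min - 1] < s[i_min + 1] else i_min + 1
    lo = min(i_min, i_n2)
    return ("Ra", "Rb", "Rc")[lo]                      # Ra: S0-S1, Rb: S1-S2, Rc: S2-S3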
As described above, according to the present embodiment, the motion vector at the original image scale can be calculated using a small reduced SAD table whose table size is reduced to 1/n². Fig. 31 shows that even when a small reduced SAD table whose table size is reduced to 1/n² is used, vector detection results substantially similar to the existing results are obtained.
The horizontal axis of Fig. 31 indicates the reduction scaling factor in one dimension, that is, in one of the horizontal direction and the vertical direction. The vertical axis indicates the error (vector error) in detecting the motion vector. The numerical values of the vector error in Fig. 31 are numbers of pixels.
In Fig. 31, a curve 301 represents the mean value of the vector error with respect to the reduction scaling factor. A curve 302 represents three times the standard deviation σ of the vector error (3σ (99.7 %)) with respect to the reduction scaling factor. A curve 303 is an approximated curve that approximates the curve 302.
Fig. 31 shows the vector error with respect to the reduction scaling factor n in one dimension. Because the SAD table is a two-dimensional table, the size of the reduced table (the number of table elements) decreases with the square of the reduction scaling factor, whereas the vector error increases only substantially linearly. The usefulness of the method according to the present embodiment can thus be appreciated.
Furthermore, even at a reduction scaling factor of n = 64 (a reduction ratio of 1/64), the vector error is very small, and no error of calculating and outputting a completely different motion vector occurs. It can therefore be considered that the size of the SAD table can effectively be reduced to 1/4096.
In addition, as described above, in hand-shake correction for moving images, real-time performance and a reduction in system delay are strongly required, and as for precision, a certain degree of vector detection error can be tolerated as long as the failure of detecting a completely different motion vector is avoided. It can therefore be considered that the present embodiment, which can greatly reduce the size of the SAD table without causing such errors, is very useful.
As described above, the present embodiment divides the reference frame 102 into a plurality of regions (16 regions in this example) and detects a motion vector (block motion vector) 205 in each divided region. This is because, as described above, there is a high possibility that a moving object is included in the frame; for example, by subjecting the 16 motion vectors 205 detected in the reference frame 102 of Fig. 32 to statistical processing while taking into account the changes of these motion vectors 205 in past frames, one global motion vector for the frame (that is, the hand-shake vector of the frame) can be detected.
In this case, as shown in Fig. 32, in the first detection, search ranges SR1, SR2, ..., SR16 are set, each having at its center one of the reference positions PO1 to PO16 of the 16 motion vectors 205 expected to be detected, and projected image blocks IB1, IB2, ..., IB16 of the target blocks are assumed in the respective search ranges SR1, SR2, ..., SR16.
Then, reference blocks having the same size as the projected image blocks IB1, IB2, ..., IB16 are set, the reference blocks thus set are moved within the respective search ranges SR1, SR2, ..., SR16, the reduced SAD tables are generated as described above, and the motion vector 205 in each of the search ranges SR1, SR2, ..., SR16 is detected. Thus, in the present embodiment, each SAD table TBLi has the structure of a reduced SAD table.
Then, as shown in Fig. 2, in the present embodiment, the 16 reduced SAD tables obtained for the 16 target blocks set in the 16 search ranges are arranged so as to be piled on each other. The SAD values at the reference block positions that correspond to each other in the search ranges (that is, at the same coordinate positions in the reduced SAD tables) are summed to obtain summed SAD values. Then, a summed reduced SAD table for the plurality of reference block positions within one search range is generated as the SAD table containing the summed SAD values. Thus, in the present embodiment, the summed SAD table SUM_TBL has the structure of a summed reduced SAD table.
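A minimal sketch of how the summed SAD table SUM_TBL is formed from the per-block reduced SAD tables is given below; the use of NumPy arrays and the variable names are illustrative assumptions, not part of the embodiment itself.

import numpy as np

# tables is assumed to be a list of 16 equally sized 2-D arrays, one per target block.
def make_sum_tbl(tables):
    stack = np.stack(tables, axis=0)        # shape (16, H, W): tables piled on each other
    sum_tbl = stack.sum(axis=0)             # add SAD values at identical coordinate positions
    return sum_tbl

# The coordinates of the minimum of sum_tbl give the integer-precision summed motion
# vector, which is then refined by the interpolation processing described earlier.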
Then, in the present embodiment, the reliability determination processing shown in Fig. 10 and Fig. 11 is carried out using the reduced SAD tables TBLi, the block motion vectors 205 obtained from the reduced SAD tables TBLi by the above-described approximate interpolation processing, and the summed SAD table SUM_TBL; a re-summed SAD table RSUM_TBL is generated from the target blocks whose block motion vectors have high reliability; and curve approximation interpolation processing using the minimum SAD value in the generated re-summed SAD table RSUM_TBL and a plurality of SAD values adjacent to the minimum SAD value is performed, whereby a high-precision global motion vector is calculated.
Compared with the method of calculating a motion vector from a reduction-converted image, which is described in Patent Document 3 as an existing method, the image processing method according to the above-described present embodiment using the reduced SAD table has the following two clearly different advantages.
First, unlike the existing method described in Patent Document 3, the method according to the present embodiment requires no image reduction conversion processing at all. In the method according to the present embodiment, when the SAD value calculated for a reference block is distributed and added to the SAD table (reduced SAD table), address conversion corresponding to the reduction scaling factor is performed at the same time.
Therefore, the method according to the present embodiment has the advantage of eliminating the logic required for image reduction conversion in the existing method described in Patent Document 3, the time and memory bandwidth consumed to store the reduced image into memory, and the memory area that must be secured for holding the reduced image.
As mentioned above, another major problem of the prior art described in Patent Document 3 is that a low-pass filter for removing aliasing (aliasing distortion) and low-illuminance noise is required at the time of image reduction conversion. That is, when an image is reduced, the image needs to be resampled after passing through an appropriate low-pass filter; otherwise, unwanted aliasing occurs, which greatly degrades the precision of motion vector calculation using the reduced image.
It has been shown theoretically that the characteristic of an ideal low-pass filter for reduction conversion is a function similar to the sinc function. The sinc function itself is an infinite-tap FIR (finite impulse response) filter with a cutoff frequency of f/2, and can be expressed as sin(xπ)/(xπ). An ideal low-pass filter with a cutoff frequency of f/(2n) for a reduction scaling factor of 1/n is expressed as sin(xπ/n)/(xπ/n), which can still be regarded as a form of the sinc function.
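The following Python sketch simply evaluates the quoted ideal characteristic sin(xπ/n)/(xπ/n) over a finite set of taps, to make the relation between the reduction scaling factor n and the required tap count visible; the tap count chosen here is an arbitrary illustration.

import math

def ideal_lpf_taps(n, num_taps=17):
    """Samples of sin(x*pi/n)/(x*pi/n) for a reduction scaling factor of 1/n."""
    taps = []
    for x in range(-(num_taps // 2), num_taps // 2 + 1):
        if x == 0:
            taps.append(1.0)                      # sinc(0) = 1
        else:
            taps.append(math.sin(x * math.pi / n) / (x * math.pi / n))
    return taps

# As n grows (stronger reduction), the main lobe widens along the tap axis, which is why a
# practical FIR approximation needs more taps -- the point made with Figs. 33 to 35.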
The upper parts of Figs. 33 to 35 show the form of the sinc function (the ideal low-pass filter characteristic) when the reduction scaling factor is 1/2, 1/4, and 1/8, respectively. It is apparent from Figs. 33 to 35 that the function expands along the tap axis as the reduction scaling factor increases. That is, even when the infinite-tap sinc function is approximated by only its main coefficients, the number of taps of the FIR filter still needs to be increased.
In addition, it is generally known that, for a filter realizing a cutoff frequency in a lower band, the number of taps, rather than the filter form, has the dominant influence on the performance of the filter.
Therefore, the method of calculating a motion vector using a reduced image, as in the existing method described in Patent Document 3, involves a trade-off: the larger the reduction scaling factor of the image, the better the effect of reducing the SAD table, but the larger the reduction scaling factor of the image, the higher the cost of the low-pass filter used as the preprocessing filter for generating the reduced image.
In general, when a high-order-tap FIR filter is realized, the cost of the operation logic increases in proportion to the square of the number of taps, which is itself a problem; an even bigger problem, however, is the increase in the number of line memories used to realize the vertical filter. In recent digital cameras, as the pixel count increases, so-called strip processing is performed to reduce the size of each line memory, which corresponds to the pixel count. Even when the size of each line is reduced, the increase in the number of line memories itself still greatly increases the total cost in terms of physical layout area.
It is understood from the above description that the image reduction method of the existing method described in Patent Document 3 faces a particularly large obstacle in the realization of the vertical low-pass filter. The method according to the present embodiment, on the other hand, simply solves this problem in a completely different way.
The lower parts of Figs. 33 to 35 show the image of the low-pass filter in the method according to the present embodiment. The method according to the present embodiment does not include image reduction processing; Figs. 33 to 35 nevertheless illustrate the low-pass filter implicit in the processing of generating and operating on the reduced SAD table.
As shown in the lower parts of Figs. 33 to 35, the characteristic of this low-pass filter is that of a simple filter in which the main coefficient part of the sinc function is linearly approximated and the number of taps increases in conjunction with the reduction scaling factor. As mentioned above, the lower the cutoff frequency, the more dominant the influence of the number of taps on the performance of the low-pass filter. That is, the processing of distributing and adding the SAD values according to the present embodiment (for example, the linearly weighted distribution and addition processing) is in itself equivalent to realizing, by a simple mechanism, a high-performance low-pass filter linked to the reduction scaling factor.
This low-pass filter has another advantage. The existing method described in Patent Document 3 reduces the image by passing the image through the low-pass filter and then resampling the image, and a considerable amount of image information is lost at that point. That is, in the operation of the low-pass filter, the word length of the luminance values of the image information is rounded before being stored into memory, and after the reduction most of the less significant bits of the image information no longer contribute to the image.
On the other hand, the method according to the present embodiment uses all bits of the luminance values of all pixels equally to calculate the SAD values, determines the distribution addition values of the SAD values, and adds the distribution addition values to the reduced SAD table. As long as the word length of each table element value in the reduced SAD table is merely increased, calculation involving no approximation error can be performed up to the final SAD value output. Because the area of the reduced SAD table is smaller than the area of the frame memory, increasing the word length of the reduced SAD table presents no great problem. As a result, the reduced SAD table and the motion vector can be detected with high precision.
[First embodiment of the image processing apparatus according to the present invention]
Next, a first embodiment of an image pickup device using the image processing apparatus according to the image processing method of the present embodiment will be described as an example with reference to the drawings. Fig. 1 is a block diagram showing an example of an image pickup device as an embodiment of the image processing apparatus according to the present invention.
In the first embodiment of Fig. 1, the present invention is applied to a hand-shake correction system for still images. Incidentally, the present invention is not limited to use for still images and is essentially also applicable to moving images. In the case of moving images, there is an upper limit to the number of frames that can be added to each other because of the real-time requirement. However, by applying the method to each frame, the present embodiment can equally be applied to a system that generates a moving image with a strong noise reduction effect produced by this method.
The first embodiment sets an input image frame as the reference frame, and detects the motion vector between the input image frame and the image frame obtained by delaying the input image frame by one frame in a frame memory. Then, the hand-shake correction for still images in the first embodiment is carried out by superimposing on each other a plurality of continuously captured images (for example, images captured at 3 fps) while performing hand-shake correction.
Thus, the first embodiment performs hand-shake correction on a captured still image by superimposing a plurality of continuously captured images on each other, thereby providing precision close to pixel precision. As described above, the first embodiment detects not only the translation components in the horizontal direction and the vertical direction between frames as the hand-shake motion vector, but also the rotation component between the frames.
As shown in Fig. 1, the image pickup device according to the embodiment of the present invention is formed by connecting a CPU (central processing unit) 1 to a system bus 2 and connecting an image pickup signal processing system 10, a user operation input unit 3, an image memory unit 4, a recording and reproducing device unit 5, and the like to the system bus 2. Incidentally, it is assumed in this specification that the CPU 1 includes a ROM (read-only memory) for storing programs that perform various kinds of software processing, a RAM (random access memory) for a work area, and the like.
As will be described later, in response to an image pickup and recording start operation through the user operation input unit 3, the image pickup device in the example of Fig. 1 records the picked-up image data. In addition, in response to a reproduction start operation performed by the user through the operation input unit 3, the image pickup device in the example of Fig. 1 reproduces the picked-up image data recorded on the recording medium of the recording and reproducing device unit 5.
As shown in Fig. 1, incident light from a subject passes through a camera optical system (not shown) having an image pickup lens 10L and irradiates an image pickup element 11, whereby image pickup is performed. In this example, the image pickup element 11 is formed by a CCD (charge coupled device) imager. Incidentally, the image pickup element 11 may instead be formed by a CMOS (complementary metal oxide semiconductor) imager.
In the image pickup device of this example, when an image pickup and recording start operation is performed, sampling is performed according to a timing signal from a timing signal generating unit 12, whereby an analog image pickup signal is output from the image pickup element 11 as a RAW signal in the Bayer arrangement of the three primary colors, that is, red (R), green (G), and blue (B). The output analog image pickup signal is supplied to a preprocessing unit 13, which performs preprocessing (for example, defect correction, γ correction, and the like), and the result is supplied to a data converting unit 14.
The data converting unit 14 converts the analog image pickup signal input to it into a digital image pickup signal (YC data) including a luminance signal component Y and chrominance components Cb/Cr. The data converting unit 14 then supplies the digital image pickup signal to the image memory unit 4 through the system bus 2.
The image memory unit 4 in the example of Fig. 1 includes three frame memories 41, 42, and 43. First, the digital image pickup signal from the data converting unit 14 is stored into the frame memory 41. Then, after one frame has elapsed, the digital image pickup signal stored in the frame memory 41 is transferred to the frame memory 42, and the digital image pickup signal of the new frame from the data converting unit 14 is written into the frame memory 41. The frame memory 42 therefore stores the frame image one frame earlier than the frame image stored in the frame memory 41.
Then, the hand-shake vector detecting unit 15 accesses the two frame memories 41 and 42 through the system bus 2 to read the data stored in the two frame memories 41 and 42. As described above, for example, the hand-shake vector detecting unit 15 carries out, for one frame, the processing of generating the 16 SAD tables, the processing of detecting the block motion vectors, the processing of generating the summed SAD table, the processing of generating the re-summed SAD table, and the processing of detecting the global motion vector, and carries out the processing of calculating the translation amount and the rotation angle of the frame.
In this case, the frame image stored in the frame memory 42 is the image of the original frame, and the frame image stored in the frame memory 41 is the image of the reference frame. Incidentally, in practice, the frame memories 41 and 42 are used in turn as a double buffer.
As described above, the hand-shake vector detecting unit 15 in the first embodiment performs the motion vector detection processing in two or more stages using the reduced SAD tables and the summed SAD table, while narrowing the search range and changing the reduction scaling factor as required.
In practice, in the hand-shake correction processing of detecting the hand-shake vector of a still image, there is rarely a strict restriction on real-time performance, the number of pixels is large, and a high-precision motion vector needs to be detected; the staged motion vector detection processing in a plurality of stages is therefore very effective.
The image memory unit 4 in the first embodiment is provided with the frame memory 43 for storing the result of superimposing a plurality of frames on each other after rotation and translation. As described above, the image frames are superimposed on the first reference image (see the image frame 120 in Fig. 3).
As indicated by the dotted line in Fig. 1, the image data of the first reference frame on which the plurality of frames after rotation and translation are superimposed is also written to the frame memory 43.
After the second and subsequent image frames are stored into the frame memory 41, the hand-shake vector detecting unit 15 always detects, using the image data stored in the frame memory 41, the relative hand-shake vector between the second or subsequent image frame and the image of the frame immediately preceding that image frame. At this time, the previously obtained hand-shake vectors are combined with the detected vector to calculate the hand-shake vector with respect to the first reference image. In addition, the hand-shake vector detecting unit 15 detects the relative rotation angle of the second or subsequent image frame with respect to the first reference image frame.
The hand-shake vector detecting unit 15 supplies the CPU 1 with the information of the relative hand-shake vector and the relative rotation angle detected for each of the second and subsequent image frames with respect to the first image frame.
Under control of the CPU 1, the second and subsequent images stored in the frame memory 42 are read from the frame memory 42 in such a manner that the relative hand-shake component (translation amount component) with respect to the reference image of the first frame is eliminated. The second and subsequent images stored in the frame memory 42 are then supplied to the rotation and translation addition unit 19. That is, the second and subsequent images from the frame memory 42 are supplied to the rotation and translation addition unit 19 after being cut out so that the hand-shake translation amount is removed.
According to a control signal from the CPU 1, the rotation and translation addition unit 19 rotates each of the second and subsequent image frames read from the frame memory 42 according to the relative rotation angle with respect to the first reference image frame, and adds each of the second and subsequent image frames to the image frame read from the frame memory 43 or averages it with that image frame. The image frame resulting from the addition or the averaging is written back to the frame memory 43.
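The following Python sketch gives one possible reading of the operation attributed to the rotation and translation addition unit 19: inverse-mapping each pixel through the relative rotation and translation and blending the result into the accumulation frame. Nearest-neighbour sampling, the blending weight, the order of composition, and all names are illustrative assumptions; the actual unit also supports the simple, averaging, and tournament addition variants described elsewhere.

import numpy as np

def accumulate_frame(acc, frame, angle_rad, shift_xy, weight):
    """acc and frame are 2-D NumPy arrays of the same shape (luminance only for brevity)."""
    h, w = frame.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_a, sin_a = np.cos(angle_rad), np.sin(angle_rad)
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # inverse-map each output pixel into the source frame (rotation about centre + shift)
    sx = cos_a * (xs - cx) + sin_a * (ys - cy) + cx - shift_xy[0]
    sy = -sin_a * (xs - cx) + cos_a * (ys - cy) + cy - shift_xy[1]
    sxi = np.clip(np.round(sx).astype(int), 0, w - 1)   # nearest-neighbour for brevity
    syi = np.clip(np.round(sy).astype(int), 0, h - 1)
    warped = frame[syi, sxi]
    return acc * (1.0 - weight) + warped * weight        # averaging-style addition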
Then, according to a control command of the CPU 1, the image frame data in the frame memory 43 is cut out so as to have a predetermined resolution and a predetermined image size, and the result is supplied to a resolution converting unit 16. Under control of the CPU 1, according to the control command of the CPU 1, the resolution converting unit 16 generates and outputs image data having the predetermined resolution and the predetermined image size.
An NTSC (National Television System Committee) encoder 18 converts the image data from the resolution converting unit 16 (from which the hand-shake component has been removed) into a standard color video signal of the NTSC system. The video signal is supplied to a display 6 forming an electronic viewfinder, so that the captured image is displayed on the display screen of the monitor.
In parallel with the monitor display, the image data from the resolution converting unit 16 (from which the hand-shake component has been removed) is subjected to coding processing, for example coding and modulation, in a codec unit 17, and is then supplied to the recording and reproducing device unit 5 so as to be recorded on a recording medium (for example, an optical disc such as a DVD (digital versatile disc), a hard disk, or the like).
In accordance with a reproduction start operation performed by the user through the operation input unit 3, the picked-up image data recorded on the recording medium of the recording and reproducing device unit 5 is read out and supplied to the codec unit 17 to be decoded for reproduction. The reproduced and decoded image data is supplied to the display 6 through the NTSC encoder 18, so that the reproduced image is displayed on the display screen of the display 6. Incidentally, although not shown in Fig. 1, the video signal from the NTSC encoder 18 can be output to the outside through a video output terminal.
The above-described hand-shake vector detecting unit 15 can be formed by hardware or by using a DSP (digital signal processor). Alternatively, the hand-shake vector detecting unit 15 can be realized as software processing by the CPU 1. Further, the hand-shake vector detecting unit 15 can be a combination of hardware, DSP processing, and software processing by the CPU 1.
The hand-shake vector detecting unit 15 may calculate only the relative block motion vectors and the global motion vector between frames, while the CPU 1 carries out the calculation of the relative high-precision global motion vector, the translation amount, and the rotation angle, and the calculation of the translation amount and the rotation angle with respect to the first frame.
Incidentally, the rotation and translation addition unit 19 can perform three frame addition processing methods in the present embodiment, namely "simple addition", "averaging addition" described later, and "tournament addition". The user operation input unit 3 has a selecting and specifying operation device (not shown in Fig. 1) for specifying one of the three frame addition processing methods. The CPU 1 supplies a selection control signal corresponding to the selection and specification made by the user through the selecting and specifying operation device to the rotation and translation addition unit 19. The rotation and translation addition unit 19 carries out the frame addition processing method specified, among the three frame addition processing methods, by the selection control signal from the CPU 1.
[Processing operation in the hand-shake vector detecting unit 15]
[First example]
A first example of the flow of processing operation in the hand-shake vector detecting unit 15 in the present embodiment will be described below with reference to the flowcharts of Figs. 36 to 39. In the first example, the translation amount and the rotation angle of the reference frame are calculated from the global motion vector of the reference frame.
Incidentally, Figs. 36 to 39 represent the processing for one reference frame, and the processing procedure of Figs. 36 to 39 is performed for each frame. In this case, after the processing of setting the search ranges in step S31 of the first detection has been performed for the first reference frame, it can be omitted for the subsequent reference frames.
The first detection will be described below. The centers of the target blocks are set as the centers of the respective search ranges, the search range offsets of the 16 search ranges for the 16 target blocks shown in Fig. 32 described above are set to zero, and the search ranges are set to the maximum range assumed in the present embodiment (step S31 in Fig. 36).
Next, in the search ranges respectively set for the 16 target blocks, the above-described processing of calculating the reduced SAD tables and the block motion vectors is performed (step S32). Details of the processing flow of step S32 will be described later.
After the generation of the reduced SAD tables for the 16 target blocks is completed, the SAD values at the reference block positions that correspond to each other in the search ranges in the 16 reduced SAD tables are summed by (Equation 3) shown in Fig. 4, whereby a summed reduced SAD table is generated for the plurality of reference block positions within one search range, the table having the same size as the reduced SAD tables (step S33).
Next, the minimum SAD value is detected in the generated summed reduced SAD table. Using the detected minimum SAD value and a plurality of SAD values adjacent to the minimum SAD value, the summed motion vector is calculated by performing the above-described approximate surface interpolation processing (step S34).
Then, with the summed motion vector calculated in step S34 as a reference, the condition determination shown in Fig. 10 is performed on the basis of the SAD values of the 16 reduced SAD tables and the block motion vectors. As described above, the reduced SAD table of each of the 16 target blocks is thereby labeled "TOP", "NEXT_TOP", "NEAR_TOP", or "OTHERS". In addition, the total score sum_score of the reference frame is calculated. The labels and the calculated total score sum_score are then retained (step S35). Incidentally, at this time, a mask flag is set for the target blocks labeled "NEAR_TOP" and "OTHERS"; the mask flag indicates that the reliability of the target blocks labeled "NEAR_TOP" and "OTHERS" is low, so that these target blocks are not used.
Next, a majority decision is made on the basis of the 16 block motion vectors calculated in step S32 (step S36). With the block motion vector determined to have the maximum count by the majority decision as a reference, the condition determination shown in Fig. 10 is performed on the basis of the SAD values of the 16 reduced SAD tables and the block motion vectors. As described above, the reduced SAD table of each of the 16 target blocks is thereby labeled "TOP", "NEXT_TOP", "NEAR_TOP", or "OTHERS". In addition, the total score many_score of the reference frame is calculated. The labels and the calculated total score many_score are then retained (step S37).
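A minimal sketch of the majority decision over the block motion vectors is shown below; representing each block motion vector as an integer (vx, vy) pair and breaking ties by the first most common value are assumptions made only for illustration.

from collections import Counter

def majority_vector(block_vectors):
    """block_vectors: list of (vx, vy) tuples, one per target block."""
    counts = Counter(block_vectors)
    vector, votes = counts.most_common(1)[0]   # vector value with the maximum count
    return vector, votes

# Example: if 11 of the 16 blocks agree on (2, -1), that vector (with 11 votes) is the
# majority-decision result that is then compared against the summed motion vector.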
Then, the summed motion vector calculated in step S34 and the motion vector determined to have the maximum count by the majority decision in step S36 are compared with each other, and it is determined whether the two motion vectors coincide with each other or are directly adjacent to each other in coordinate position in the reduced SAD table (coordinate positions directly adjacent to each other in the vertical direction, the horizontal direction, or a diagonal direction) (step S38).
When it is determined in step S38 that the summed motion vector and the maximum-count motion vector of the majority decision neither coincide with each other nor are directly adjacent to each other, it is determined that the global motion vector of the reference frame is unreliable, the reference frame is removed from the frames subjected to the superimposing processing for still image hand-shake correction, and the subsequent processing is skipped (step S39). The processing procedure then ends.
When it is determined in step S38 that the summed motion vector and the maximum-count motion vector of the majority decision coincide with each other or are directly adjacent to each other, it is then determined whether the total score sum_score obtained in step S35 is equal to or greater than a predetermined threshold value θth1 set in advance and whether the total score many_score obtained in step S37 is equal to or greater than a predetermined threshold value θth2 set in advance (step S41 in Fig. 37).
When one or both of the conditions that the total score sum_score is equal to or greater than the threshold value θth1 and that the total score many_score is equal to or greater than the threshold value θth2 are not satisfied in step S41, the processing proceeds to step S39, where the reference frame is removed from the frames subjected to the superimposing processing for still image hand-shake correction, and the subsequent processing is skipped. The processing procedure then ends.
When both of the conditions that the total score sum_score is equal to or greater than the threshold value θth1 and that the total score many_score is equal to or greater than the threshold value θth2 are satisfied in step S41, the summed SAD values are recalculated using only the SAD values of the SAD tables corresponding to the target blocks given the labels "TOP" and "NEXT_TOP" in step S35 among the labeled SAD tables, and the summed reduced SAD table is thus regenerated (step S42).
Then, using the coordinate position of the minimum SAD value in the re-summed SAD table obtained by the recalculation and the SAD values at the coordinate positions adjacent to the coordinate position of the minimum SAD value, the approximate surface interpolation processing is performed (step S43). In this example, the approximate surface interpolation processing in step S43 is performed using the table elements in the rectangular area of 3 x 3 table elements described above with reference to Fig. 20A.
Then, the motion vector detected as the result of the approximate surface interpolation is retained as the global motion vector to be used for setting the search range offsets in the second detection (step S44).
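The following Python sketch outlines steps S42 to S44 under stated assumptions: the labels and reduced SAD tables are parallel lists, interpolate_3x3 stands in for the 3 x 3 interpolation of Fig. 20A, and the conversion from table coordinates back to a motion vector at the original scale is simplified.

import numpy as np

def resum_and_refine(tables, labels, interpolate_3x3, n):
    # keep only the reduced SAD tables of blocks labelled "TOP" or "NEXT_TOP"
    keep = [t for t, lab in zip(tables, labels) if lab in ("TOP", "NEXT_TOP")]
    rsum_tbl = np.sum(np.stack(keep, axis=0), axis=0)         # RSUM_TBL
    ty, tx = np.unravel_index(np.argmin(rsum_tbl), rsum_tbl.shape)
    dx, dy = interpolate_3x3(rsum_tbl, tx, ty)                # fractional refinement
    # convert the table coordinate back to a search-range offset and undo the 1/n scaling
    cy, cx = rsum_tbl.shape[0] // 2, rsum_tbl.shape[1] // 2
    return ((tx + dx - cx) * n, (ty + dy - cy) * n)           # global motion vector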
Next, the hand-shake vector detecting unit 15 continues with the second detection shown in Fig. 38 and Fig. 39.
As shown in Fig. 12B, the 16 search ranges of the 16 target blocks are each set to a range centered at the position offset by the global motion vector (that is, by the translation amount) obtained in the first detection and retained in step S44, the range being smaller than that in the first detection (step S51 in Fig. 38).
Next, in the search ranges respectively set for the 16 target blocks, the above-described processing of calculating the reduced SAD tables and the block motion vectors is performed (step S52).
After the generation of the reduced SAD tables for the plurality of target blocks is completed in step S52, the SAD values at the reference block positions that correspond to each other in the search ranges are summed by (Equation 3) shown in Fig. 4 in the reduced SAD tables of the target blocks given the labels "TOP" and "NEXT_TOP", excluding the target blocks for which the mask flag was set in the first detection, whereby a summed reduced SAD table for the plurality of reference block positions within one search range, having the same size as the reduced SAD tables, is generated (step S53). Incidentally, excluding the target blocks for which the mask flag was set in the first detection, the processing of calculating the reduced SAD tables and the block motion vectors in step S52 may be performed only on the target blocks given the labels "TOP" and "NEXT_TOP".
Next, the minimum SAD value is detected in the summed reduced SAD table generated in step S53. Using the detected minimum SAD value and a plurality of SAD values adjacent to the minimum SAD value, a fractional-precision summed motion vector is calculated by performing the above-described approximate surface interpolation processing (step S54).
Next, with the summed motion vector calculated in step S54 as a reference, the condition determination shown in Fig. 10 is performed on the basis of the SAD values of the reduced SAD tables and the block motion vectors of the target blocks for which the mask flag was not set in the first detection. As described above, the reduced SAD table of each of these target blocks is thereby labeled "TOP", "NEXT_TOP", "NEAR_TOP", or "OTHERS". In addition, the total score sum_score of the reference frame is recalculated. The labels and the calculated total score sum_score are then retained (step S55). Incidentally, at this time as well, the mask flag is set for the target blocks newly labeled "NEAR_TOP" and "OTHERS"; the mask flag indicates that the reliability of the target blocks labeled "NEAR_TOP" and "OTHERS" is very low, so that these target blocks will not be used.
Next, a majority decision is made on the basis of the block motion vectors, among the block motion vectors calculated in step S52, of the target blocks for which the mask flag was not set in the first detection (step S56). With the block motion vector determined to have the maximum count as the result of the majority decision as a reference, the condition determination shown in Fig. 10 is performed on the basis of the SAD values of the reduced SAD tables and the block motion vectors of the target blocks for which the mask flag is not set. As described above, the reduced SAD table of each of these target blocks is thereby labeled "TOP", "NEXT_TOP", "NEAR_TOP", or "OTHERS". In addition, the total score many_score of the reference frame is calculated. The labels and the calculated total score many_score are retained (step S57).
Then, the summed motion vector calculated in step S54 and the maximum-count motion vector detected as the result of the majority decision in step S56 are compared with each other, and it is determined whether the two motion vectors coincide with each other or are directly adjacent to each other in coordinate position in the reduced SAD table (coordinate positions directly adjacent to each other in the vertical direction, the horizontal direction, or a diagonal direction) (step S58).
When it is determined in step S58 that the summed motion vector and the maximum-count motion vector of the majority decision neither coincide with each other nor are directly adjacent to each other, it is determined that the global motion vector of the reference frame is unreliable, the reference frame is removed from the frames subjected to the superimposing processing for still image hand-shake correction, and the subsequent processing is skipped (step S59). The processing procedure then ends.
When it is determined in step S58 that the summed motion vector and the maximum-count motion vector of the majority decision coincide with each other or are directly adjacent to each other, it is determined whether the total score sum_score obtained in step S55 is equal to or greater than a predetermined threshold value θth3 set in advance and whether the total score many_score obtained in step S57 is equal to or greater than a predetermined threshold value θth4 set in advance (step S61 in Fig. 39).
When one or both of the conditions that the total score sum_score is equal to or greater than the threshold value θth3 and that the total score many_score is equal to or greater than the threshold value θth4 are not satisfied in step S61, the processing proceeds to step S59, where the reference frame is removed from the frames subjected to the superimposing processing for still image hand-shake correction, and the subsequent processing is skipped. The processing procedure then ends.
When both of the conditions that the total score sum_score is equal to or greater than the threshold value θth3 and that the total score many_score is equal to or greater than the threshold value θth4 are satisfied in step S61, the summed SAD values are recalculated using only the SAD values of the SAD tables corresponding to the target blocks given the labels "TOP" and "NEXT_TOP" in step S55 among the labeled SAD tables, and the summed reduced SAD table is thus regenerated (step S62).
Then, using the coordinate position of the minimum SAD value in the re-summed SAD table obtained by the recalculation and the SAD values at the coordinate positions adjacent to the coordinate position of the minimum SAD value, the approximate surface interpolation processing is performed, and the resulting motion vector is calculated and retained as the global motion vector (step S63). In this example, the approximate surface interpolation processing in step S63 is performed using the table elements in the rectangular area of 3 x 3 table elements described above with reference to Fig. 20A.
Then, on the basis of the calculated global motion vector, the relative translation amount of the still image of this frame with respect to the immediately preceding frame is determined, and the translation amount of this frame with respect to the first frame is calculated by adding up the determined translation amounts (step S64).
Next, the rotation angle between the global motion vector similarly detected and retained for the immediately preceding frame and the global motion vector of this frame detected in step S63 is calculated as the relative rotation angle of the still image of this frame with respect to the immediately preceding frame, and the rotation angle of this frame with respect to the first frame is calculated by adding up the calculated rotation angles (step S65).
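A minimal sketch of the accumulation performed in steps S64 and S65 is given below; keeping the running totals in a small helper class and the angle convention are illustrative assumptions.

class ShakeAccumulator:
    """Accumulates per-frame relative translation and rotation into totals vs. the first frame."""
    def __init__(self):
        self.total_shift = (0.0, 0.0)   # translation of the current frame vs. the first frame
        self.total_angle = 0.0          # rotation of the current frame vs. the first frame

    def update(self, rel_shift, rel_angle):
        self.total_shift = (self.total_shift[0] + rel_shift[0],
                            self.total_shift[1] + rel_shift[1])
        self.total_angle += rel_angle
        return self.total_shift, self.total_angle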
By completing the above processing, the hand-shake vector detecting unit 15 completes the processing of calculating, on a frame-by-frame basis, the translation amount and the rotation angle caused by hand shake, and then supplies the translation amount and the rotation angle as the calculation results to the CPU 1. Then, using the translation amount and the rotation angle as the calculation results, the rotation and translation addition unit 19 superimposes this frame on the first frame.
Incidentally, in the above description, the translation amount and the rotation angle with respect to the first frame are also calculated in step S64 and step S65. However, only the calculation of the relative translation amount and the relative rotation angle with respect to the immediately preceding frame may be performed in step S64 and step S65, and the CPU 1 may calculate the translation amount and the rotation angle with respect to the first frame.
When the above processing is completed, the processing operation for one reference frame in the hand-shake vector detecting unit 15 ends.
Incidentally, the processing from step S31 to step S34 in the flowcharts of Figs. 36 and 37 and the processing from step S51 to step S54 in the flowcharts of Figs. 38 and 39 may be performed by the hand-shake vector detecting unit 15, and the subsequent processing may be performed by software on the CPU 1.
In addition, in the process of detecting the hand-shake vector (global motion vector), the above-described processing method of establishing the global motion vector may be combined with a method of predicting the global motion vector from the frequency of previously obtained motion vectors along the time axis, so as to further improve reliability and precision.
In addition, in the above example, the re-summed SAD table is generated using only the SAD values of the reduced SAD tables of the blocks given the labels "TOP" and "NEXT_TOP" in step S35 or step S55 among the labeled blocks. However, the re-summed SAD table may instead be generated using only the SAD values of the reduced SAD tables of the blocks given the labels "TOP" and "NEXT_TOP" in step S37 or step S57 among the labeled blocks. Further, the re-summed SAD table may be generated using the SAD values of the reduced SAD tables of the blocks given the labels "TOP" and "NEXT_TOP" both in step S35 or step S55 and in step S37 or step S57 among the labeled blocks.
In addition, in the above example, the total scores sum_score and many_score corresponding to the labels given to the block motion vectors are used as one criterion for evaluating the global motion vector of the reference frame for which the motion vector is calculated. However, instead of the total scores, whether the number of block motion vectors given the labels "TOP" and "NEXT_TOP" is equal to or greater than a predetermined threshold value may be used as the determination criterion, so that the global motion vector is given a high evaluation value when the number of block motion vectors with the labels "TOP" and "NEXT_TOP" is equal to or greater than the predetermined threshold value.
[Second example]
Tremble second example of the processing operating process in the vector detection unit 15 with reference to hand in the flow chart description present embodiment of Figure 40~Figure 42 below.In second example, only use the monolithic motion vector of high reliability in the monolithic motion vector of reference frame, calculate the translational movement and the anglec of rotation of reference frame by the top method of describing with reference to Fig. 5~Fig. 8 E.
The processing of Figure 40~Figure 42 is the processing for a reference frame equally, and each reference frame is carried out the processing procedure of Figure 40~Figure 42.In this case, a reference frame has been carried out among the step S71 in first detects, the processing of hunting zone is set after, can body slightly to after the processing of reference frame.
The first detection will be described first. The center of each of the 16 search ranges for the 16 target blocks shown in Figure 32 described above is set at the center of the corresponding target block, the search range offsets are set to zero, and each search range is set to the maximum range assumed in the present embodiment (step S71 in Figure 40).
Next, the above-described processing of calculating the reduced SAD table and the block motion vector is performed in the search range set for each of the 16 target blocks (step S72). The details of the processing flow of step S72 will be described later.
After the reduced SAD tables have been generated for the 16 target blocks, the SAD values at reference block positions that correspond to each other within the search ranges are summed over the 16 reduced SAD tables according to (Equation 3) shown in Figure 4, thereby generating an aggregate reduced SAD table that covers the plurality of reference block positions in the search range and has the same size as each reduced SAD table (step S73).
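As a rough illustration of the aggregation in step S73, the following sketch sums the per-block reduced SAD tables element-wise; it is a minimal Python/NumPy example under assumed array shapes and is not part of the embodiment itself.

```python
import numpy as np

def aggregate_reduced_sad_tables(reduced_tables):
    """Element-wise sum of per-block reduced SAD tables.

    reduced_tables: array of shape (num_blocks, H, W), one reduced SAD table
    per target block, all covering the same reduced reference block positions
    in the search range.  Returns the aggregate reduced SAD table (H, W).
    """
    reduced_tables = np.asarray(reduced_tables, dtype=np.float64)
    return reduced_tables.sum(axis=0)

# example with 16 target blocks and hypothetical 9x9 reduced tables:
tables = np.random.rand(16, 9, 9)
aggregate = aggregate_reduced_sad_tables(tables)
min_pos = np.unravel_index(np.argmin(aggregate), aggregate.shape)
```

The position of the smallest value in the aggregate table is the starting point for the aggregate-motion-vector calculation of step S74.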
Next, the minimum SAD value is detected in the generated aggregate reduced SAD table. The above-described approximate curved-surface interpolation processing is performed using the detected minimum SAD value and a plurality of SAD values adjacent to it, thereby calculating the aggregate motion vector (step S74).
Next, with the aggregate motion vector calculated in step S74 as the reference, the condition determination shown in Figure 10 is performed on the SAD values and block motion vectors of the 16 reduced SAD tables. As described above, the reduced SAD table of each of the 16 target blocks is thereby labeled "TOP", "NEXT_TOP", "NEAR_TOP", or "OTHERS". At this time, a masking flag is set for the target blocks labeled "NEAR_TOP" or "OTHERS"; the masking flag indicates that these target blocks have low reliability and are therefore not to be used (step S75).
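The exact condition determination of Figure 10 is defined earlier in the document; purely for illustration, the sketch below assumes a simple distance criterion between each block motion vector and the aggregate motion vector, with hypothetical thresholds, to show how the labels and the masking flag of step S75 might be assigned.

```python
import numpy as np

def label_block(block_vec, aggregate_vec,
                top_th=0.5, next_top_th=1.5, near_top_th=3.0):
    """Label one target block "TOP", "NEXT_TOP", "NEAR_TOP" or "OTHERS".
    The pure distance criterion and the threshold values are assumptions
    for this sketch; the actual determination is the one shown in Figure 10."""
    diff = np.asarray(block_vec, dtype=float) - np.asarray(aggregate_vec, dtype=float)
    d = float(np.hypot(diff[0], diff[1]))
    if d <= top_th:
        return "TOP"
    if d <= next_top_th:
        return "NEXT_TOP"
    if d <= near_top_th:
        return "NEAR_TOP"
    return "OTHERS"

labels = [label_block(v, (1.0, -0.5)) for v in [(1.0, -0.5), (2.0, -0.5), (5.0, 2.0)]]
masks = [lab in ("NEAR_TOP", "OTHERS") for lab in labels]   # masking flags of step S75
```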
Next, it is determined whether the number of target blocks labeled "TOP" is less than a preset threshold θth5 (step S76). When the number of target blocks labeled "TOP" is determined to be less than the threshold θth5, it is determined whether the number of target blocks labeled "NEXT_TOP" is less than a preset threshold θth6 (step S77).
When the number of target blocks labeled "NEXT_TOP" is determined in step S77 to be less than the threshold θth6, the reference frame is removed from the frames used in the superposition processing of the still-image hand-shake correction, and the subsequent processing is skipped (step S78). The processing procedure then ends.
When the number of target blocks labeled "TOP" is determined in step S76 to be equal to or greater than the threshold θth5, or when the number of target blocks labeled "NEXT_TOP" is determined in step S77 to be equal to or greater than the threshold θth6, the approximate curved-surface interpolation processing described earlier with reference to Figure 17, Figures 20A and 20B, Figures 23A and 23B, or Figures 27A and 27B is performed on the reduced SAD tables of the target blocks labeled "TOP" or "NEXT_TOP" (the blocks for which the masking flag is not set), thereby calculating high-accuracy (sub-pixel precision) block motion vectors (step S79).
Next, as described above with reference to Figures 5 and 6, only the highly reliable block motion vectors calculated in step S79 are used to calculate the translation amount of the frame with respect to the preceding frame (step S80). The translation amount calculated in this step corresponds to the global motion vector in the first example described earlier. This translation amount is used to set the search range offsets in the second detection. The processing of the first detection thus ends.
Next, the hand-shake vector detection unit 15 goes on to perform the second detection shown in Figures 41 and 42.
As shown in Figure 12B, each of the 16 search ranges for the 16 target blocks is set as a range that is offset by the translation amount obtained in step S80 of the first detection and centered on the offset position, and is made narrower than the search range of the first detection (step S81 in Figure 41).
Next, the above-described processing of calculating the reduced SAD table and the block motion vector is performed in the search range set for each of the 16 target blocks (step S82).
After the reduced SAD tables have been generated for the plurality of target blocks in step S82, the SAD values at reference block positions that correspond to each other within the search ranges are summed, according to (Equation 3) shown in Figure 4, over the reduced SAD tables of the target blocks labeled "TOP" or "NEXT_TOP" (excluding the target blocks for which the masking flag was set in the first detection), thereby generating an aggregate reduced SAD table that covers the plurality of reference block positions in the search range and has the same size as each reduced SAD table (step S83). Incidentally, excluding the target blocks for which the masking flag was set in the first detection, the calculation of the reduced SAD table and block motion vector in step S82 may be performed only for the target blocks labeled "TOP" or "NEXT_TOP".
Next, the minimum SAD value is detected in the aggregate reduced SAD table generated in step S83. The above-described approximate curved-surface interpolation processing is performed using the detected minimum SAD value and a plurality of SAD values adjacent to it, thereby calculating a sub-pixel-precision aggregate motion vector (step S84).
Next, with the aggregate motion vector calculated in step S84 as the reference, the condition determination shown in Figure 10 is performed on the SAD values and block motion vectors of the reduced SAD tables of the target blocks for which the masking flag was not set in the first detection. As described above, the reduced SAD table of each of these target blocks is thereby labeled "TOP", "NEXT_TOP", "NEAR_TOP", or "OTHERS". In addition, the masking flag is set for the target blocks labeled "NEAR_TOP" or "OTHERS"; the masking flag indicates that the reliability of these target blocks is very low, so that they will not be used (step S85).
Next, it is determined whether the number of target blocks for which the masking flag is not set is less than a preset threshold θth7 (step S86). When the number of target blocks for which the masking flag is not set is less than the threshold θth7, the reference frame is removed from the frames used in the superposition processing of the still-image hand-shake correction, and the subsequent processing is skipped (step S87). The processing procedure then ends.
When the number of target blocks for which the masking flag is not set is determined in step S86 to be equal to or greater than the preset threshold θth7, the approximate curved-surface interpolation processing described earlier with reference to Figure 17, Figures 20A and 20B, Figures 23A and 23B, or Figures 27A and 27B is performed on the reduced SAD tables of the target blocks labeled "TOP" or "NEXT_TOP" (the blocks for which the masking flag is not set), thereby calculating high-accuracy (sub-pixel precision) block motion vectors (step S88).
Next, as described earlier with reference to Figures 5 and 6, only the highly reliable block motion vectors calculated in step S88 are used to calculate the translation amount (α, β) of the frame with respect to the preceding frame (step S91 in Figure 42).
In addition, as described earlier with reference to Figures 6 to 8E, only the highly reliable block motion vectors calculated in step S88 are used to calculate the rotation angle (γ) of the frame with respect to the preceding frame (step S92).
Next, the ideal block motion vector of each target block is calculated on the basis of the translation amount (α, β) obtained in step S91 and the rotation angle (γ) obtained in step S92. The error ERRi between the ideal block motion vector and the block motion vector Vi actually calculated for each target block is computed, and the sum ΣERRi of the errors is calculated (step S93). The sum ΣERRi of the errors can be calculated by (Equation H) in Figure 43, and is the error sum for the frame.
Incidentally, as described in connection with (Equation 6), it has been confirmed from measured values of the hand shake of many subjects that the rotation-angle component is very small, so that cos γ ≈ 1 and sin γ ≈ γ can be used for the rotation matrix R. The error ERRi can therefore be expressed as shown in Figure 43.
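The following sketch shows one way the ideal block motion vector and the error ERRi of step S93 could be computed under the small-angle approximation; the sign convention, the block-center coordinates and the use of the Euclidean norm are assumptions, since (Equation H) itself is given in Figure 43.

```python
import numpy as np

def ideal_block_vector(block_center, alpha, beta, gamma):
    """Ideal motion vector of a target block under translation (alpha, beta)
    and a small rotation gamma about the frame centre, using cos(gamma) ~ 1
    and sin(gamma) ~ gamma.  block_center = (x, y) relative to the rotation
    centre; the sign convention is an assumption for this sketch."""
    x, y = block_center
    return np.array([alpha - gamma * y, beta + gamma * x])

def error_sum(block_centers, block_vectors, alpha, beta, gamma):
    """Per-block errors ERRi and their sum (Euclidean norm assumed here;
    the exact form of (Equation H) is the one given in Figure 43)."""
    errs = [float(np.linalg.norm(np.asarray(v, dtype=float)
                                 - ideal_block_vector(c, alpha, beta, gamma)))
            for c, v in zip(block_centers, block_vectors)]
    return errs, sum(errs)
```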
Next, it is determined whether the error sum ΣERRi calculated in step S93 is less than a preset threshold θth8 (step S94). When the sum is determined not to be less than the threshold θth8, the masking flag is set for the target block whose block motion vector Vi has the largest error ERRi among the target blocks for which the error was calculated in step S93 (step S95).
After step S95, the processing returns to step S83 in Figure 41, where the SAD values at reference block positions that correspond to each other within the search ranges are summed, according to (Equation 3) shown in Figure 4, over the reduced SAD tables of the target blocks excluding those for which the masking flag is set, thereby generating an aggregate reduced SAD table that covers the plurality of reference block positions in the search range and has the same size as each reduced SAD table. The processing from step S84 onward is then repeated.
When the error sum ΣERRi calculated in step S93 is determined in step S94 to be less than the threshold θth8, the translation amount (α, β) and the rotation angle (γ) calculated in steps S91 and S92 are set as the hand-shake component. The processing of the second detection then ends.
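A condensed sketch of the loop over steps S83 to S95 is given below. The re-aggregation of the reduced SAD tables and the recomputation of the block motion vectors (steps S83 to S88) are not modelled; estimate_translation and estimate_rotation stand in for the methods of Figures 5 to 8E, and the error metric follows the assumptions noted above.

```python
import numpy as np

def refine_shake_component(block_centers, block_vectors, theta_th8,
                           estimate_translation, estimate_rotation):
    """Condensed sketch of the loop over steps S83-S95.  estimate_translation
    and estimate_rotation are assumed callables returning (alpha, beta) and
    gamma respectively for the currently unmasked blocks."""
    active = list(range(len(block_vectors)))
    while True:
        ctrs = np.array([block_centers[i] for i in active], dtype=float)
        vecs = np.array([block_vectors[i] for i in active], dtype=float)
        alpha, beta = estimate_translation(ctrs, vecs)
        gamma = estimate_rotation(ctrs, vecs)
        # ideal vectors under the small-angle approximation (sign convention assumed)
        ideal = np.column_stack([alpha - gamma * ctrs[:, 1],
                                 beta + gamma * ctrs[:, 0]])
        errs = np.linalg.norm(vecs - ideal, axis=1)
        if errs.sum() < theta_th8 or len(active) <= 1:
            return alpha, beta, gamma          # hand-shake component of step S94
        active.pop(int(np.argmax(errs)))       # masking flag of step S95
```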
The hand-shake vector detection unit 15 then supplies the translation amount and the rotation angle obtained as the calculation result to the CPU 1. From the received translation amount and rotation angle, the CPU 1 calculates the translation amount and the rotation angle with respect to the first frame. The CPU 1 sends the translation amount and the rotation angle to the rotation and translation addition unit 19. The rotation and translation addition unit 19 then uses the received translation amount and rotation angle to perform the processing of superimposing the frame on the first frame.
Incidentally, also in the second example, the hand-shake vector detection unit 15 may itself calculate the translation amount and the rotation angle with respect to the first frame.
In addition, also in this example, the processing of steps S71 to S74 in Figure 40 and of steps S81 to S84 in Figure 41 may be performed by the hand-shake vector detection unit 15, and the subsequent processing may be performed in software by the CPU 1.
In addition, in the above example, the aggregate motion vector is used as the global motion vector for determining the reliability of the block motion vectors. However, the motion vector that gains the largest count in a majority decision may instead be used as the reference.
Incidentally, the method of the first example shown in Figure 35 described above may be used in the first detection, so that the search range offsets for the second detection are set on the basis of the aggregate motion vector serving as the global motion vector, while the method of the second example shown in Figures 41 and 42 is used in the second detection.
That is, since the individual block motion vectors in the first detection cannot basically be expected to have very high precision, the search range offsets for the second detection may be determined on the basis of the global motion vector obtained from the block-matching results of the target blocks labeled "TOP" or "NEXT_TOP", without using the method of obtaining the translation amount described earlier with reference to Figures 5 and 6.
In general, when the hand shake includes a rotational component, the method described above with reference to Figures 5 and 6 is still effective as a method of calculating the translational component of the hand shake with high precision. However, this method is best applied in the second or a later detection, in which high-accuracy block motion vectors are obtained.
Incidentally, rather than merely setting the search range offsets for the second detection on the basis of the global motion vector or the translation amount after the first detection, the rotation angle between the global motion vector and the global motion vector of the preceding frame, or the rotation angle obtained by the rotation-angle calculation method described above with reference to Figures 7A to 8E, may also be calculated after the first detection, so that the search range offset for the second detection is set independently for each target block with the rotation angle also taken into account. In this case, the search ranges can be narrowed further, and improvements in both precision and processing speed can be expected.
In the description above, block motion vectors close to the aggregate motion vector are regarded as valid block motion vectors in both the first detection and the second detection. However, in the second detection, the block motion vectors of all target blocks other than those for which the masking flag was set in the first detection may be regarded as valid block motion vectors. This is because the second detection provides high-accuracy block motion vectors and can even detect the rotational component of the hand shake, so that these block motion vectors need not be similar to the averaged aggregate motion vector.
The hand-shake vector detection processing described above is very effective for still images, because for still images, in contrast to moving images, a sufficient processing time is available but higher precision is required. For still higher precision, three or more detections may be performed instead of the two detections described above. In that case, the narrowing of the search ranges with search offsets and the search for highly reliable block motion vectors are performed before the last detection, and in the last detection the translation amount and the rotation angle are calculated as shown in Figures 41 and 42, for example.
[Example of the processing procedure in steps S32, S52, S72 and S82]
An example of the processing procedure for generating the reduced SAD table and calculating the block motion vector of each target block in step S32 in Figure 36, step S52 in Figure 38, step S72 in Figure 40 and step S82 in Figure 41 will now be described.
<First example>
Figures 44 and 45 show a first example of the processing procedure for calculating the reduced SAD table and the block motion vector of each target block in steps S32, S52, S72 and S82.
First, a reference vector (vx, vy) corresponding to one reference block position in the search range SR shown in Figure 32 described above is specified (step S101). As described above, when the position of the target block in the frame (the center of the search range) is taken as the reference position (0, 0), (vx, vy) represents the position indicated by the specified reference vector: vx is the horizontal displacement component of the specified reference vector from the reference position, and vy is the vertical displacement component of the specified reference vector from the reference position. As in the existing example described above, the displacements vx and vy are values in units of pixels.
With the center of the search range taken as the reference position (0, 0), when the search range is limited to ±Rx in the horizontal direction and ±Ry in the vertical direction, the search range can be expressed as:
-Rx ≤ vx ≤ +Rx, -Ry ≤ vy ≤ +Ry
Next, the coordinates (x, y) of one pixel in the target block Io are specified (step S102). Then, as shown in (Equation 1) described above, the absolute difference α between the pixel value Io(x, y) at the specified coordinates (x, y) in the target block and the pixel value Ii(x + vx, y + vy) at the corresponding position in the reference block Ii is calculated (step S103).
Then, the calculated absolute difference α is added to the previous SAD value at the address (table element) indicated by the reference vector (vx, vy) of the reference block Ii, and the resulting SAD value is written back to that address (step S104). That is, letting SAD(vx, vy) be the SAD value corresponding to the reference vector (vx, vy), the SAD value is calculated by (Equation 2) described above, that is,
SAD(vx, vy) = Σα = Σ|Io(x, y) − Ii(x + vx, y + vy)| ...... (Equation 2)
This SAD value is then written to the address indicated by the reference vector (vx, vy).
Next, it is determined whether the above operations of steps S102 to S104 have been performed for the pixels at all coordinates (x, y) in the target block Io (step S105). When it is determined that the operations have not yet been completed for the pixels at all coordinates (x, y) in the target block Io, the processing returns to step S102, the pixel at the next coordinates (x, y) in the target block Io is specified, and the processing from step S102 onward is repeated.
The above processing of steps S101 to S105 is the same as the processing of steps S1 to S5 in the flowchart of Figure 73.
In the present embodiment, when it is determined in step S105 that the above operations have been performed for the pixels at all coordinates (x, y) in the target block Io, the reduction ratio is set to 1/n, and a reduced reference vector (vx/n, vy/n), obtained by reducing the reference vector (vx, vy) to 1/n, is calculated (step S106).
Next, a plurality of reference vectors adjacent to the reduced reference vector (vx/n, vy/n) are detected (four adjacent reference vectors in this example, as described above) (step S107). Then, as described above, on the basis of the relation between the position indicated by the reduced reference vector and the positions indicated by the respective adjacent reference vectors, the values to be distributed and added to the table elements corresponding to the four detected adjacent reference vectors are calculated as linearly weighted distribution values from the SAD value obtained in step S104 (step S108). The four linearly weighted distribution values thus obtained are then added to the values of the SAD table elements corresponding to the respective adjacent reference vectors (step S109).
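A compact sketch of steps S101 to S109 for one target block is shown below: for every reference vector the block SAD is computed and then distributed, with linear weights, onto the four table elements adjacent to the reduced reference vector. The function name, the array layout and the table indexing are assumptions made for the sketch, not the hardware implementation.

```python
import numpy as np

def build_reduced_sad_table(target, ref_frame, top_left, Rx, Ry, n):
    """Sketch of steps S101-S109 for one target block.

    target:    target block (h x w) from the original frame
    ref_frame: reference frame; the whole search range must lie inside it
    top_left:  (y0, x0) of the target block position projected into ref_frame
    Rx, Ry:    horizontal/vertical search limits, assumed divisible by n
    n:         reduction ratio denominator (reference vectors shrink to 1/n)
    """
    h, w = target.shape
    th, tw = 2 * (Ry // n) + 2, 2 * (Rx // n) + 2   # reduced-table size (assumed layout)
    table = np.zeros((th, tw))
    y0, x0 = top_left
    for vy in range(-Ry, Ry + 1):
        for vx in range(-Rx, Rx + 1):
            ref = ref_frame[y0 + vy:y0 + vy + h, x0 + vx:x0 + vx + w]
            sad = np.abs(target.astype(int) - ref.astype(int)).sum()     # (Equation 2)
            ry, rx = (vy + Ry) / n, (vx + Rx) / n   # reduced vector in table coordinates
            iy, ix = int(ry), int(rx)
            fy, fx = ry - iy, rx - ix
            # linear-weight distribution onto the four adjacent table elements
            for dy, dx, wgt in ((0, 0, (1 - fy) * (1 - fx)), (0, 1, (1 - fy) * fx),
                                (1, 0, fy * (1 - fx)), (1, 1, fy * fx)):
                table[iy + dy, ix + dx] += wgt * sad                      # steps S108-S109
    return table
```

In this layout the zero reference vector corresponds to the table element at (Ry/n, Rx/n), which matters when the final vector is converted back by the factor n in step S115.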
When step S109 is completed, it is determined whether the calculation of the SAD value has been completed for all reference blocks of interest, that is, whether the above operations of steps S101 to S109 have been completed for all reference vectors (vx, vy) in the search range (step S111 in Figure 45).
When it is determined in step S111 that there is a reference vector (vx, vy) for which the above operations have not yet been completed, the processing returns to step S101, the next reference vector (vx, vy) for which the operations have not been completed is set, and the processing from step S101 onward is repeated.
When it is determined in step S111 that there is no reference vector (vx, vy) in the search range for which the above operations have not been completed, it is determined that the reduced SAD table has been completed. The minimum SAD value is then detected in the completed reduced SAD table (step S112).
Next, a quadric surface is generated using the minimum SAD value (minimum value) at the table element address (mx, my) and the SAD values of a plurality of neighboring table elements (15 neighboring table elements in this example, as described above) (step S113). A minimum-value vector (px, py) representing the sub-pixel-precision position corresponding to the minimum SAD value of the quadric surface is then calculated (step S114). This minimum-value vector (px, py) corresponds to the sub-pixel-precision minimum-value table element address.
Then, the motion vector to be obtained, (px × n, py × n), is calculated by multiplying the calculated minimum-value vector (px, py) representing the sub-pixel-precision position by n (step S115).
This completes the motion vector detection by block matching for one target block in the present embodiment. When the reduced SAD tables and motion vectors of the plurality of target blocks set in one frame (16 target blocks in this case) are calculated, the search range and the reduction ratio 1/n are set anew for each target block of interest, and the above processing shown in Figures 44 and 45 is repeated for each divided region.
Incidentally, it goes without saying that the method using the quadratic curves in the horizontal direction and the vertical direction described above may be used as the method of calculating the minimum-value vector (px, py) representing the sub-pixel-precision position.
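For the quadratic-curve variant just mentioned, a sub-pixel minimum can be estimated with two three-point parabola fits, one per axis, and then scaled back by n; the sketch below assumes the reduced-table layout of the earlier sketch and takes the table position of the zero reference vector as a parameter.

```python
import numpy as np

def subpixel_motion_vector(reduced_table, n, zero_pos):
    """Sketch of steps S112-S115 using horizontal/vertical quadratic curves.

    zero_pos: (row, col) of the zero reference vector in the reduced table.
    Returns the motion vector (px*n, py*n) at sub-pixel precision.
    """
    my, mx = np.unravel_index(np.argmin(reduced_table), reduced_table.shape)
    # keep the three-point fits inside the table (a simplification for this sketch)
    my = int(np.clip(my, 1, reduced_table.shape[0] - 2))
    mx = int(np.clip(mx, 1, reduced_table.shape[1] - 2))

    def parabola_offset(left, centre, right):
        denom = left - 2.0 * centre + right
        return 0.0 if denom == 0 else 0.5 * (left - right) / denom

    dy = parabola_offset(reduced_table[my - 1, mx], reduced_table[my, mx],
                         reduced_table[my + 1, mx])
    dx = parabola_offset(reduced_table[my, mx - 1], reduced_table[my, mx],
                         reduced_table[my, mx + 1])
    py = (my + dy) - zero_pos[0]
    px = (mx + dx) - zero_pos[1]
    return px * n, py * n
```

The quadric-surface fit of steps S113 and S114 uses 15 neighboring table elements rather than the two neighbors per axis used in this simplified variant.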
Incidentally, the processing of steps S101 to S111 in Figures 44 and 45 may be performed by the hand-shake vector detection unit 15, and the subsequent processing may be performed in software by the CPU 1.
<Second example>
In the first example described above, a SAD value is obtained for one reference block (reference vector), the distribution and addition values for the plurality of reference vectors adjacent to the reduced reference vector are obtained from that SAD value, and the distribution and addition processing is then performed.
In the second example, on the other hand, each time the difference between a pixel in the reference block and the corresponding pixel in the target block is detected, the distribution and addition values (difference values rather than SAD values) for the plurality of reference vectors adjacent to the reduced reference vector are obtained from that difference value, and the distributed difference values thus obtained are added in. According to the second example, the reduced SAD table is completed when the difference operations have been performed for all pixels in the reference block.
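The only structural change from the first example is where the distribution happens; the sketch below (hypothetical names, NumPy table) shows the per-pixel update of steps S124 to S127, in which the individual difference value α, not the block SAD, is spread over the four adjacent table elements.

```python
import numpy as np

def distribute_pixel_difference(table, alpha, vx, vy, Rx, Ry, n):
    """Distribute one pixel difference alpha for reference vector (vx, vy)
    onto the four table elements adjacent to the reduced vector (vx/n, vy/n)."""
    ry, rx = (vy + Ry) / n, (vx + Rx) / n
    iy, ix = int(ry), int(rx)
    fy, fx = ry - iy, rx - ix
    for dy, dx, w in ((0, 0, (1 - fy) * (1 - fx)), (0, 1, (1 - fy) * fx),
                      (1, 0, fy * (1 - fx)), (1, 1, fy * fx)):
        table[iy + dy, ix + dx] += w * alpha
    return table

# example call with a hypothetical 10x10 reduced table:
table = distribute_pixel_difference(np.zeros((10, 10)), alpha=3.0,
                                    vx=2, vy=-1, Rx=8, Ry=8, n=2)
```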
Figures 46 and 47 are flowcharts of the motion vector detection processing according to the second example.
The processing of steps S121 to S123 in Figure 46 is the same as the processing of steps S101 to S103 in Figure 44, and a detailed description thereof will therefore be omitted.
In the second example, after the difference value α of the pixel at the coordinates (x, y) between the reference block and the target block is calculated in step S123, the reduction ratio is set to 1/n, and the reduced reference vector (vx/n, vy/n), obtained by reducing the reference vector (vx, vy) to 1/n, is calculated (step S124).
Next, a plurality of reference vectors adjacent to the reduced reference vector (vx/n, vy/n) are detected (four adjacent reference vectors in this example, as described above) (step S125). Then, as described above, on the basis of the relation between the position indicated by the reduced reference vector and the positions indicated by the respective adjacent reference vectors, the difference values to be distributed and added to the table elements corresponding to the four detected adjacent reference vectors are calculated as linearly weighted distribution values (difference values) from the difference value α obtained in step S123 (step S126).
The four linearly weighted distribution values thus obtained are then added to the values of the table elements corresponding to the respective adjacent reference vectors (step S127).
After step S127 is completed, it is determined whether the above operations of steps S122 to S127 have been performed for the pixels at all coordinates (x, y) in the target block Io (step S128). When it is determined that the operations have not yet been completed for the pixels at all coordinates (x, y) in the target block Io, the processing returns to step S122, the next pixel position (x, y) in the target block Io is specified, and the processing from step S122 onward is repeated.
When it is determined in step S128 that the above operations have been performed for the pixels at all coordinates (x, y) in the target block Io, it is determined whether the calculation of the SAD value has been completed for the reference block of interest, that is, whether the above operations have been completed for all reference blocks in the search range, in other words for all reference vectors (vx, vy) (step S131 in Figure 47).
When it is determined in step S131 that there is a reference vector (vx, vy) for which the above operations have not yet been completed, the processing returns to step S121, the next reference vector (vx, vy) for which the operations have not been completed is set, and the processing from step S121 onward is repeated.
When it is determined in step S131 that there is no reference vector (vx, vy) in the search range for which the above operations have not been completed, it is determined that the reduced SAD table has been completed. The minimum SAD value is then detected in the completed reduced table (step S132).
Next, a quadric surface is generated using the minimum SAD value (minimum value) at the table element address (mx, my) and the SAD values of a plurality of neighboring table elements (15 neighboring table elements in this example, as described above) (step S133). A minimum-value vector (px, py) representing the sub-pixel-precision position corresponding to the minimum SAD value of the quadric surface is then calculated (step S134). This minimum-value vector (px, py) corresponds to the sub-pixel-precision minimum-value table element address.
Then, the motion vector to be obtained, (px × n, py × n), is calculated by multiplying the calculated minimum-value vector (px, py) representing the sub-pixel-precision position by n (step S135).
This completes the motion vector detection by block matching for one target block in the second example. When the reduced SAD tables and motion vectors of the plurality of target blocks set in one frame (16 target blocks in this case) are calculated, the search range and the reduction ratio 1/n are set anew for each target block of interest, and the above processing shown in Figures 46 and 47 is repeated for each divided region.
Incidentally, it goes without saying that the method using the quadratic curves in the horizontal direction and the vertical direction described above may be used as the method of calculating the minimum-value vector (px, py) representing the sub-pixel-precision position.
Incidentally, the processing of steps S121 to S131 in Figures 46 and 47 may be performed by the hand-shake vector detection unit 15, and the subsequent processing may be performed in software by the CPU 1.
<Third example>
As shown in Figure 31, when the motion vector detecting method according to the embodiment of the present invention is used, the failure mode of outputting a completely different motion vector does not occur even when the reduction ratio of the reference vectors is 1/64. The size of the SAD table can therefore effectively be reduced to 1/4096.
Specifically, a reduced SAD table reduced to 1/4096 is prepared, and the motion vector in a first detection is calculated with a reduction ratio of 1/64. Next, the search range is narrowed with the position indicated by the motion vector detected in the first detection as its center, and a second detection is performed with a smaller degree of reduction than in the first detection, for example 1/8. That is, by making the reduction ratios of the first detection and the second detection different from each other and setting the reduction ratio of the second detection so that it falls within the vector error range of the first detection, the motion vector can be detected with very high accuracy.
The motion vector detection processing in the third example will be described below with reference to the flowcharts of Figures 48 to 51.
The third example shown in Figures 48 to 51 uses the first example described above as the basic motion vector detection processing. Therefore, the processing of steps S141 to S149 in Figure 48 and of steps S151 to S155 in Figure 49 is the same as the processing of steps S101 to S109 in Figure 44 and of steps S111 to S115 in Figure 45.
In the third example, instead of ending the processing when the motion vector is calculated in step S155 of Figure 49, the motion vector calculated in step S155 is used as the motion vector of the first detection. In the following step S156, the search range within the same reference frame is narrowed on the basis of the motion vector calculated in the first detection, and the reduction ratio of the reference vectors is changed from the ratio 1/na of the first detection to a ratio 1/nb with a smaller degree of reduction (na > nb).
Specifically, as shown in Figure 52, when the block motion vector BLK_Vi of the target block TB has been calculated in the search range SR_1 set in the first processing, the block range in which correlation exists between the reference frame and the original frame can be roughly identified from the calculated motion vector BLK_Vi. Therefore, as shown in the lower part of Figure 52, a narrower range centered on the block range in which correlation exists between the reference frame and the original frame can be set as the search range SR_2 of the second detection. In this case, as shown in Figure 52, the displacement (search range offset) between the center Poi_1 of the search range SR_1 in the first processing and the center Poi_2 of the search range SR_2 in the second processing corresponds to the motion vector BLK_Vi detected in the first processing.
In addition, in the present embodiment, the degree of reduction of the reference vectors in the second detection is made smaller than that in the first detection. The motion vector can therefore be expected to be detected with a smaller error in the second detection.
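The control flow of the third example can be summarised as a two-stage, coarse-to-fine call of the same block-matching routine; in the sketch below, detect() stands in for one run of the first- or second-example procedure, and its signature is an assumption.

```python
def hierarchical_block_vector(detect, wide_range, narrow_range, na=64, nb=8):
    """Sketch of the third example's control flow (names and signature assumed).

    detect(center, search_range, n) performs one block-matching run for the
    target block, searching a range of half-width 'search_range' centred on
    'center' with the reference vectors reduced to 1/n, and returns the
    detected motion vector relative to the target block position.
    """
    # first detection: wide range, strong reduction 1/na (steps S141-S155)
    coarse = detect(center=(0, 0), search_range=wide_range, n=na)
    # second detection: range narrowed around the first result, weaker
    # reduction 1/nb with na > nb (step S156 onward)
    fine = detect(center=coarse, search_range=narrow_range, n=nb)
    return fine
```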
After the narrower search range and the new reduction ratio have thus been set in step S156, the motion vector detection processing of the second detection is performed in steps S157 and S158, steps S161 to S168 in Figure 50 and steps S171 to S174 in Figure 51 in the same manner as in the first detection. The processing of these steps is the same as the processing of steps S101 to S109 in Figure 44 and of steps S111 to S115 in Figure 45.
The block motion vector finally obtained in step S174 is thus the desired motion vector of the second detection.
The above example uses the first example described above as the block motion vector detecting method and repeats the method in a second stage. The method may of course also be repeated in a second and subsequent stages while the search range is further narrowed and the reduction ratio is changed as required.
In addition, it goes without saying that the second example described above may be used instead of the first example as the block motion vector detecting method. Further, in the foregoing examples, the method using the cubic curves in the horizontal direction and the vertical direction described above may be used as the method of calculating the minimum-value vector (px, py) representing the sub-pixel-precision position.
Incidentally, the processing of steps S141 to S148 in the flowcharts of Figures 48 to 51 may be performed by the hand-shake vector detection unit 15, and the subsequent processing may be performed by the CPU 1.
[Addition processing in the rotation and translation addition unit 19]
After the translational component (the translation amount of the frame) and the rotational component (the rotation angle of the frame) caused by hand shake have thus been obtained for each frame of the still image, the rotation and translation addition unit 19 performs the addition (superposition) processing.
In the present embodiment, as described above, three addition methods are prepared in advance in the image pickup device of this example so that the user can perform so-called picture making in accordance with the user's various intentions, and the user can select an addition method from among the three addition methods according to the intended picture making by performing a selection operation via the user operation input unit 3.
Note that, as described above, the present embodiment is applied only to still images for simplicity of description, but the present embodiment is essentially also applicable to moving images. In the case of moving images there is an upper limit on the number of frames that can be added because of real-time requirements. However, by applying the method of the present embodiment to each frame, the same device can also be used as a system for generating moving images subjected to a high degree of noise reduction.
In the present embodiment, the rotation and translation addition unit 19 of the image pickup device in the example of Figure 1 is configured so that three methods can be selectively implemented as the addition (superposition) method, namely a simple addition method, an averaging addition method and a tournament addition method. The details of these methods will be described below in order. Incidentally, in the present embodiment, the number of image frames to be added is eight, for example.
(1) Simple addition method
Figure 53 is a block diagram of the relation between the rotation and translation addition unit 19 and the video memory 4 in the case of the simple addition method. In this case, the rotation and translation addition unit 19 has a rotation and translation processing unit 191, gain amplifiers 192 and 193, and an adder 194.
As described above, the frame memory 43 in the video memory 4 stores the image frame Fm after addition. However, when the image frames are input in order, the first image frame F1 serves as the reference, and the first frame F1 is therefore written directly into the frame memory 43. The second and subsequent image frames Fj (j = 2, 3, 4, ...) are stored in the frame memory 42 of the video memory 4 and then supplied to the rotation and translation addition unit 19. Incidentally, since the addition result must allow for the displacement of the image frames (corresponding to the translation amount (α, β) and the rotation angle (γ)) relative to the assumed image frame size, the frame memory 43 in the video memory 4 has an area larger than one frame by at least the allowed displacement (an area corresponding to the translation amount (α, β) and the rotation angle (γ)).
Receiving from the CPU 1 the information on the translation amount (α, β) and the rotation angle (γ) of the second or a subsequent image frame Fj with respect to the first image frame F1, the rotation and translation processing unit 191 translates and rotates the second or subsequent image frame Fj. The rotation and translation processing unit 191 translates and rotates the second or subsequent image frame Fj by reading it from the frame memory 42 in such a manner as to cancel the hand shake with respect to the first image frame F1.
Specifically, from the information supplied by the CPU 1 on the translation amount (α, β) and the rotation angle (γ) of the image frame Fj with respect to the first image frame, the rotation and translation processing unit 191 calculates, for each pixel to be superimposed on a pixel of the first image frame or of the subsequent addition-result image frame Fm in the frame memory 43, the corresponding pixel address in the second or subsequent image frame Fj in the frame memory 42. The rotation and translation processing unit 191 reads the pixel data of the image frame Fj from the address in the frame memory 42 given by this calculation.
Incidentally, in the present embodiment, when the second and subsequent image frames from the frame memory 42 are added, the pixel data written at the address positions of the first image frame are read from the frame memory 43 in order. The rotation and translation processing unit 191 then sequentially calculates, for the second or subsequent image frame, the pixel address in the frame memory 42 that corresponds to the address position being read in the frame memory 43.
The gain amplifier 192 multiplies each piece of pixel data (luminance signal component and chrominance components) of the translated and rotated second or subsequent image frame Fj from the rotation and translation processing unit 191 by a gain (multiplication coefficient) w1, and supplies the result to the adder 194. The gain amplifier 193 multiplies each piece of pixel data of the first image frame, or of the added image frame Fm, from the frame memory 43 by a gain (multiplication coefficient) w2, and supplies the result to the adder 194.
The adder 194 writes each piece of pixel data of the added image frame Fm back to (overwrites) the same address in the frame memory 43.
In the present embodiment, the gain w1 of the gain amplifier 192 for the pixel data of the read second or subsequent image frame Fj (referred to as the addition image) is always w1 = 1.
On the other hand, the gain w2 of the gain amplifier 193 for the pixel data of the first image frame from the frame memory 43, or of the image frame Fm obtained as the addition result (referred to as the added-to image), differs between the case where the second or subsequent image frame Fj has a corresponding pixel to be added and the case where, as a result of the translation and rotation of the two image frames, the second or subsequent image frame Fj has no corresponding pixel to be added (regions that cannot be superimposed on each other).
That is, because of the rotation and translation of the addition image, there are always regions in which there is no pixel of the addition image to be added to a pixel of the added-to image. Where a pixel to be added exists, the gain is w2 = 1. Where it does not, the gain w2 takes a different value depending on the frame number of the addition image frame Fj being superimposed, and the gain w2 for the j-th image frame is w2 = j/(j − 1).
In this way, the present embodiment reduces the perceptible difference at the boundary between the regions of the addition-result image that have pixels to be added and the regions that do not.
To control this gain, the rotation and translation processing unit 191 in the present embodiment supplies the CPU 1 with information EX indicating whether a pixel of the second or subsequent image frame exists in the frame memory 42 at the pixel address to be superimposed on the pixel of the image frame Fm, that is, whether the frame memory 42 contains a pixel to be added. Receiving this information, the CPU 1 controls the gain w2 of the gain amplifier 193.
Incidentally, instead of the CPU 1 controlling the gain w2 of the gain amplifier 193, the rotation and translation processing unit 191 may itself supply the gain w2 to the gain amplifier 193 according to whether a pixel of the second or subsequent image frame exists in the frame memory 42 at the pixel address to be superimposed on the pixel of the image frame Fm.
Figure 54 shows the addition of each j-th image frame in the case of the simple addition method. Figure 54 illustrates that the adder 194 and the frame memory 43 are used repeatedly to superimpose a plurality of image frames (eight image frames in the example of Figure 54) on one another. In Figure 54, the numbers in the circles denote the image frame numbers, the values of the gain (multiplication coefficient) w2 apply to the added-to image, and the values in parentheses correspond to the case where there is no pixel to be added.
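For illustration only, the sketch below performs one pass of the simple addition in array form, assuming the j-th frame has already been translated and rotated onto the accumulated image and that a boolean mask plays the role of the information EX; the per-pixel gain switching of the hardware gain amplifiers is expressed with numpy.where.

```python
import numpy as np

def simple_addition_step(added_to, addition, valid, j):
    """One pass of the simple addition (steps S185-S187 in array form).
    'addition' is the j-th frame already translated and rotated onto the
    added-to image, and 'valid' is a boolean mask that is True where the
    j-th frame has a pixel to be added (the role of the information EX)."""
    return np.where(valid,
                    1.0 * addition + 1.0 * added_to,      # w1 = w2 = 1
                    (j / (j - 1.0)) * added_to)           # w1 = 0, w2 = j/(j-1)

# superimposing eight frames, with stand-in data:
acc = np.zeros((4, 6))                        # first frame F1
for j in range(2, 9):
    frame = np.ones((4, 6))                   # aligned j-th frame
    valid = np.ones((4, 6), dtype=bool)
    acc = simple_addition_step(acc, frame, valid, j)
```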
Figure 55 is a flowchart of assistance in explaining the processing steps when the rotation and translation addition unit 19 in the image pickup device according to the embodiment of the present invention performs the simple addition method. Incidentally, each step in the flowchart of Figure 55 is performed mainly under the control of the CPU 1.
First, the CPU 1 performs control to store the first image frame in the frame memory 43 (step S181). Next, the CPU 1 sets the variable j representing the number of the image frame being processed to j = 2, which represents the second frame (step S182).
Then, the CPU 1 performs control to store the j-th image frame in the frame memory 42 (step S183). Next, as described above, under the control and instruction of the CPU 1, the hand-shake vector detection unit 15 calculates the global motion vector, or the translation amount and the rotation angle, of the j-th image frame with respect to the first image frame, and then sends the calculated translation amount and rotation angle to the CPU 1 (step S184).
Next, receiving the translation amount and the rotation angle from the CPU 1, the rotation and translation addition unit 19 reads the j-th image frame from the frame memory 42 while rotating and translating it. At the same time, the rotation and translation addition unit 19 reads the first image frame, or the image frame obtained as the addition result, from the frame memory 43 (step S185). Incidentally, the image read from the frame memory 42 is referred to as the addition image, and the image read from the frame memory 43 is referred to as the added-to image.
Next, the rotation and translation addition unit 19 adds the pixel data of the addition image and the pixel data of the added-to image together with the gains w1 and w2 both set to "1". However, in the regions of the added-to image on which the addition image is not superimposed, that is, where there is no pixel data of the addition image to be added to the pixel data of the added-to image, the gain w1 for the pixel data of the addition image is set to w1 = 0, and the gain w2 for the pixel data of the added-to image is set to w2 = j/(j − 1) (step S186).
The rotation and translation addition unit 19 then writes the image data obtained as the addition result back to the frame memory 43 (step S187).
Next, the CPU 1 determines whether a predetermined number of image frames have been superimposed on one another (step S188). When the CPU 1 determines that the superposition of the predetermined number of image frames has not been completed, the CPU 1 increments the variable j representing the number of the image frame being processed to j = j + 1 (step S189). The processing then returns to step S183 to repeat the processing from step S183 onward.
When the CPU 1 determines in step S188 that the superposition of the predetermined number of image frames has been completed, the CPU 1 ends the processing procedure of Figure 55.
The simple addition method adds the added-to image and the addition image to each other with the gains for the added-to image and the addition image always set to "1" (with no distinction between the luminance signal and the chrominance signals), except in the regions where there is no pixel to be added. The image produced by the addition therefore becomes gradually brighter.
Accordingly, when the simple addition method is used, a shooting mode can be realized in which the intermediate addition result (the added-to image) is displayed on the monitor while continuous shooting is performed, and the user stops the continuous shooting when the image reaches the desired brightness.
Since the ISO sensitivity of the camera is basically kept very low when a low-illuminance subject requiring a long exposure time is shot continuously, the user can watch the image produced by the addition become gradually brighter. This corresponds to a long-exposure image. Preferably, not only the intermediate image produced by the addition but also its histogram can be monitored. In addition, the image pickup device may of course determine the number of frames to be added together automatically.
(2) Averaging addition method
The averaging addition method is similar to the simple addition method described above, but differs from the simple addition method in the gains w1 and w2 for the addition image and the added-to image. Specifically, in the averaging addition method, when the second image is added to the first image, the first image and the second image are multiplied by gains w1 and w2 each having a value of 1/2 and then added together, while the j-th image is added with the gain w1 of the addition image set to w1 = 1/j and the gain w2 of the added-to image set to w2 = (j − 1)/j.
That is, no matter how many frames are added together, the brightness of the image obtained as the addition result remains unchanged, and the j added images are weighted equally. Where, because of the translation and rotation, there is no pixel of the addition image to be added to the pixel of the added-to image, the gain w2 for the pixel data of the added-to image is set to 1, so that the brightness of the addition result is maintained over the entire frame.
Figure 56 is a block diagram of the relation between the rotation and translation addition unit 19 and the video memory 4 in the case of the averaging addition method. In this case, as in the case of the simple addition method shown in Figure 53, the rotation and translation addition unit 19 has a rotation and translation processing unit 191, gain amplifiers 192 and 193, and an adder 194. The averaging addition method differs from the simple addition method in that the gains w1 and w2 of the gain amplifiers 192 and 193 vary with the number of image frames to be added, and the values of the gains w1 and w2 are therefore supplied from the CPU 1.
Figure 57 shows the addition of each j-th image frame in the case of the averaging addition method. Figure 57 illustrates that the adder 194 and the frame memory 43 are used repeatedly to superimpose a plurality of image frames (eight image frames in the example of Figure 57) on one another. In Figure 57, the numbers in the circles denote the image frame numbers, the values of the gain (multiplication coefficient) w2 apply to the added-to image, and the values in parentheses correspond to the case where there is no pixel to be added.
As shown in Figure 57, the gain w1 of the j-th addition image is w1 = 1/j, and the gain w2 of the added-to image when the j-th image is added is w2 = (j − 1)/j.
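The gain schedule of the averaging addition method can be summarised in a few lines; the helper below is a hypothetical convenience function for illustration, not part of the embodiment.

```python
def averaging_gains(j, has_pixel_to_add):
    """Gains (w1, w2) used when the j-th frame is added in the averaging
    addition method: w1 = 1/j and w2 = (j-1)/j where a pixel to be added
    exists, and w1 = 0, w2 = 1 where it does not, so that the brightness of
    the result stays constant over the whole frame."""
    return (1.0 / j, (j - 1.0) / j) if has_pixel_to_add else (0.0, 1.0)

assert averaging_gains(2, True) == (0.5, 0.5)
assert averaging_gains(8, False) == (0.0, 1.0)
```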
Figure 58 is a flowchart of assistance in explaining the processing steps when the rotation and translation addition unit 19 in the image pickup device according to the embodiment of the present invention performs the averaging addition method. Incidentally, each step in the flowchart of Figure 58 is performed mainly under the control of the CPU 1.
First, the CPU 1 performs control to store the first image frame in the frame memory 43 (step S191). Next, the CPU 1 sets the variable j representing the number of the image frame being processed to j = 2, which represents the second frame (step S192).
Then, the CPU 1 performs control to store the j-th image frame in the frame memory 42 (step S193). Next, as described above, under the control and instruction of the CPU 1, the hand-shake vector detection unit 15 calculates the global motion vector, or the translation amount and the rotation angle, of the j-th image frame with respect to the first image frame, and then sends the calculated translation amount and rotation angle to the CPU 1 (step S194).
Next, receiving the translation amount and the rotation angle from the CPU 1, the rotation and translation addition unit 19 reads the j-th image frame from the frame memory 42 while rotating and translating it. At the same time, the rotation and translation addition unit 19 reads the first image frame, or the image frame obtained as the addition result, from the frame memory 43 (step S195). Incidentally, the image read from the frame memory 42 is referred to as the addition image, and the image read from the frame memory 43 is referred to as the added-to image.
Next, the rotation and translation addition unit 19 adds the pixel data of the addition image and the pixel data of the added-to image together with the gain w1 for the pixel data of the addition image set to w1 = 1/j and the gain w2 for the pixel data of the added-to image set to w2 = (j − 1)/j. However, in the regions of the added-to image on which the addition image is not superimposed, that is, where there is no pixel data of the addition image to be added to the pixel data of the added-to image, the gain w1 for the pixel data of the addition image is set to w1 = 0, and the gain w2 for the pixel data of the added-to image is set to w2 = 1 (step S196).
The rotation and translation addition unit 19 then writes the image data obtained as the addition result back to the frame memory 43 (step S197).
Next, the CPU 1 determines whether the predetermined number of image frames have been superimposed on one another (step S198). When the CPU 1 determines that the superposition of the predetermined number of image frames has not been completed, the CPU 1 increments the variable j representing the number of the image frame being processed to j = j + 1 (step S199). The processing then returns to step S193 to repeat the processing from step S193 onward.
When the CPU 1 determines in step S198 that the superposition of the predetermined number of image frames has been completed, the CPU 1 ends the processing procedure of Figure 58.
As an application of the averaging addition method, the image pickup device according to the present embodiment has a trick (special-effect) function in which moving objects gradually fade away. That is, the averaging addition method can realize a new shooting mode in which, although the brightness of the image remains constant from the addition of the first frame onward, the moving parts in the image frames gradually blur and disappear as the continuous shooting proceeds. Incidentally, each time an image frame is added, the noise in the image frame also fades away as an effect of the addition; this is a secondary benefit.
(3) Tournament addition method
Both the simple addition method and the averaging addition method always set the first frame as the reference image, align the positions of the second and subsequent images with respect to the first image, and then add the second and subsequent images to the first image. The tournament addition method, on the other hand, treats all the images equally. The reference image to be set is therefore not limited to the first image and may be any of the images. However, every pair of images to be added together needs to be translated and rotated.
Figure 59 is a block diagram of the relation between the rotation and translation addition unit 19 and the video memory 4 in the case of the tournament addition method. In this case, the rotation and translation addition unit 19 has two rotation and translation processing units 195 and 196, gain amplifiers 197 and 198, and an adder 199.
As described above, the video memory 4 has at least the two frame memories 41 and 42 used by the hand-shake vector detection unit 15 in the processing of detecting the hand-shake vector, and the frame memory 43 for storing the image frame obtained as the image addition result. For the tournament addition method, frame memory sufficient to hold the number of image frames to be added is further provided.
That is, when the tournament addition method is selected, the image pickup device continuously obtains the plurality of image frames to be added together and stores all of the image frames in the video memory 4. The image pickup device then sets one of the image frames as the reference image and starts the addition processing.
In the example described below, the tournament addition is performed using eight image frames. The symbols F1 to F8 in the circles in the video memory 4 in Figure 59 denote the continuously obtained and stored image frames.
It is assumed that, when the addition processing is started, the block motion vectors, global motion vectors and so on of all eight image frames have already been calculated by the hand-shake vector detection unit 15.
As described above, however, the hand-shake vector detection unit 15 can only detect relative motion vectors with respect to the immediately preceding frame, or motion vectors with the first frame as the reference. It is therefore necessary either to tolerate accumulated errors or to perform the detection again with the reference image that has been set.
Figure 60 shows the summary of contest additive process.Numeral in Figure 60 circle is corresponding to eight picture frame F1~F8.In this example, as the addition in the phase I, with picture frame F1 and F2, picture frame F3 and F4, picture frame F5 and F6 and picture frame F7 and F8 addition each other.
In the addition in the phase I, rotation and translation processing unit 195 with 196 with corresponding to the amount of trembling of adversary mutually with respect to the benchmark image that is provided with, translation and rotate per two picture frames of addition each other.
After the addition of finishing in the phase I, stand addition in the second stage as the image of addition result in the phase I.As the addition in the second stage, in the example of Figure 60, with the addition result of picture frame F1 and F2 and the addition result addition each other of picture frame F3 and F4, and with the addition result of picture frame F5 and F6 and the addition result addition each other of picture frame F7 and F8.In the addition of second stage, the picture frame that each will addition is consistent with benchmark image, and therefore, translation and rotation processing in rotation and translation processing unit 195 and 196 are unnecessary.
After the addition in finishing second stage, stand addition in the phase III as the image of addition result in the second stage.As the addition in the phase III, in the example of Figure 60, with the addition result of picture frame F1, F2, F3 and F4 and the addition result addition each other of picture frame F5, F6, F7 and F8.In the addition of phase III, the picture frame that each will addition is consistent with benchmark image, and therefore, translation and rotation processing in rotation and translation processing unit 195 and 196 are unnecessary.
Returning to Figure 59, when the addition processing starts, the CPU 1 first sets the two image frames to be added to each other in the first stage, and supplies the translation amount and rotation angle of each of the set frames with respect to the reference image to the rotation and translation processing units 195 and 196.
The rotation and translation processing units 195 and 196 each read the image data of the corresponding frame from the image memory 4, and at the same time translate and rotate the frame using the translation amount and rotation angle received from the CPU 1, thereby canceling the hand shake relative to the reference image.
The two image frames output from the rotation and translation processing units 195 and 196 are then multiplied by the gains w4 and w3 in the gain amplifiers 197 and 198, and thereafter added to each other in the adder 199. The image data after the addition is written into a buffer memory of the image memory 4.
The CPU 1 subjects the other pairs of frames to the same first-stage addition shown in Figure 60 as described above. That is, for each of the other designated pairs, the rotation and translation processing units 195 and 196 likewise translate and rotate the frames in the manner described above and add them to each other, and the addition results are stored in the image memory 4.
After the first-stage addition is completed, in order to perform the second-stage addition, the CPU 1 designates the frames obtained as the first-stage addition results as the frames to be read from the image memory 4, sets the translation amount and rotation angle to zero, and instructs the rotation and translation processing units 195 and 196 to read the frames.
According to the information and the instruction from the CPU 1, the rotation and translation processing units 195 and 196 read the image data of the frames obtained as the first-stage addition results, and the second-stage addition processing shown in Figure 60 is performed.
After the second-stage addition is completed, in order to perform the third-stage addition, the CPU 1 designates the frames obtained as the second-stage addition results as the frames to be read from the image memory 4, sets the translation amount and rotation angle to zero, and instructs the rotation and translation processing units 195 and 196 to read the frames.
According to the information and the instruction from the CPU 1, the rotation and translation processing units 195 and 196 read the image data of the frames obtained as the second-stage addition results, and the third-stage addition processing shown in Figure 60 is performed. The contest addition in this example is thereby completed.
Figure 61 shows the gains (multiplication coefficients) w3 and w4 of the gain amplifiers 197 and 198 and the flow of the addition processing when eight image frames are added by the contest addition method.
The multiplication coefficients w3 and w4 shown in Figure 61 are based on the average addition process described above. In the region where the two images overlap each other, the multiplication coefficients are w3 = w4 = 1/2, and in the regions where the two images do not overlap each other, the multiplication coefficients are w3 = 1 and w4 = 1.
Incidentally, the multiplication coefficients w3 and w4 are not limited to values based on the average addition process; values based on the simple addition process may also be used.
Although omitted in the above description, because the two images to be added in the first stage are translated and rotated with respect to the reference image, the contest addition process according to the present embodiment has a mechanism for identifying, in the second and subsequent stages, pixel positions within the reference image area at which neither of the two images has a corresponding pixel to contribute to the addition.
Specifically, during the first-stage addition, when the pixel value of the luminance component Y of the addition result is "0", it is replaced with the value "1". Conversely, at pixel positions at which, as a result of the translation and rotation, the two images added in the first stage do not both have corresponding pixels, the pixel value of the luminance component Y of the addition result is set to "0".
Then, in the additions of the second and subsequent stages, when the pixel values of the luminance component Y of both frames are "0", the luminance component Y after the addition is also set to "0". When all the images have been added together, every pixel position always includes a valid pixel (a pixel of the reference image), so that pixels having the luminance value "0" are ultimately replaced with the pixel values of the reference image.
Thus, by using the pixel value "0" of the luminance component Y as an invalid-pixel flag, a flag identifying pixel positions at which no valid pixel exists at the time of superposition can be provided without any increase in the capacity of the image data format.
Of course, one bit may instead be provided separately to flag, as an invalid pixel, each pixel position at which no valid pixel exists at the time of superposition; in that case any pixel value, whether of the luminance component Y or of the color difference components Cb/Cr, can be used as it is. From the viewpoint of cost and of the influence on image quality described above, however, the method of using the invalid-pixel flag according to the present embodiment is considered the best approach.
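The following is a minimal sketch of one pairwise addition under this convention, assuming the two luminance (Y) planes have already been aligned to the reference image and that the value 0 is reserved as the invalid-pixel flag; the array handling is illustrative and is not the device's actual data path:

```python
import numpy as np

INVALID = 0  # luminance value reserved as the "no valid pixel here" flag

def pair_add(a, b):
    """Add two aligned luminance planes, propagating the Y == 0 invalid flag."""
    a = a.astype(np.float32)
    b = b.astype(np.float32)
    both = (a != INVALID) & (b != INVALID)
    only_a = (a != INVALID) & (b == INVALID)
    only_b = (a == INVALID) & (b != INVALID)

    out = np.zeros_like(a)                 # both invalid -> flag kept at 0
    out[both] = 0.5 * (a[both] + b[both])  # w3 = w4 = 1/2 where both pixels exist
    out[only_a] = a[only_a]                # gain 1 / 0 where only one pixel exists
    out[only_b] = b[only_b]

    valid = both | only_a | only_b
    out[valid & (out == INVALID)] = 1      # a valid sum of 0 is bumped to 1
    return out
```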
Figures 62 and 63 are flowcharts of assistance in explaining the processing procedure when the above contest addition process is performed by the rotation and translation addition unit 19 in the image pickup device according to the present embodiment. Incidentally, each step in the flowcharts of Figures 62 and 63 is performed mainly under the control of the CPU 1.
First, the CPU 1 sequentially writes and stores the image data of the first to eighth image frames in frame memories of the image memory 4 (step S201). Next, the CPU 1 sets a reference image among the first to eighth frames (step S202). Then, the CPU 1 calculates in advance the translation amount and rotation angle of each of the first to eighth frames with respect to the frame set as the reference image (step S203).
Then, the CPU 1 starts the first-stage addition processing. The CPU 1 supplies the rotation and translation addition unit 19 with information on the translation amounts and rotation angles of the first and second frames with respect to the reference image frame. On the basis of this information, the rotation and translation addition unit 19 simultaneously reads the image data of the first frame and the second frame from the corresponding frame memories of the image memory 4 in such a manner that the translation and rotation of the first and second frames relative to the reference image frame are canceled (step S204).
Then, under the control of the CPU 1, while reading the image data of the first frame and the second frame, the rotation and translation addition unit 19 adds the image data of the first frame and the image data of the second frame together with the gains w3 = 1/2 and w4 = 1/2, and then writes the addition result into a frame memory of the image memory 4 (step S205).
In step S205, pixel positions within the reference image frame (the pixel positions at which the added pixel data is written) are set in order; for each set pixel position, the image data of the first frame and the second frame is searched for the pixels corresponding to that position, and the retrieved pixels are added together. When a corresponding pixel does not exist in one of the first and second frames, the gain for the image data of the frame without the pixel is set to "0", and the gain for the image data of the frame that has the pixel is set to "1".
Further, in step S205, when neither the first frame nor the second frame has a corresponding pixel, the pixel value of the luminance component Y of the addition result is set to "0". In addition, when a corresponding pixel exists but the addition result of the image data is "0", the pixel value is changed to "1".
Next, the CPU 1 instructs the rotation and translation addition unit 19 to perform the processing of steps S204 and S205 on the third and fourth frames, the fifth and sixth frames, and the seventh and eighth frames, and the rotation and translation addition unit 19 performs this processing (step S211 in Figure 63).
Next, the CPU 1 instructs the rotation and translation addition unit 19 to start the second-stage addition processing. According to the instruction from the CPU 1, the rotation and translation addition unit 19 reads from the image memory 4 the image data of the addition result of the first and second images and the image data of the addition result of the third and fourth images, without performing translation or rotation. Then, with the gains w3 and w4 both set to 1/2, the two pictures are added to each other (step S212).
In step S212, when the luminance component Y of the pixel data of one of the two addition-result images to be added (the addition result of the first and second images and the addition result of the third and fourth images) is "0", the gain for the pixel data of the image whose luminance component Y is "0" is set to "0", and the gain for the pixel data of the other image is set to "1".
Further, in step S212, when the luminance components Y of the pixel data of both images to be added are "0", the pixel value of the luminance component Y of the addition result is also set to "0".
Next, the CPU 1 instructs the rotation and translation addition unit 19 to perform the above processing of step S212 on the image data of the addition result of the fifth and sixth images and the image data of the addition result of the seventh and eighth images, and the rotation and translation addition unit 19 performs this processing (step S213).
Next, the CPU 1 instructs the rotation and translation addition unit 19 to perform the third-stage addition processing, that is, the above processing of step S212 on the addition result of the first to fourth images and the addition result of the fifth to eighth images, and the rotation and translation addition unit 19 performs this processing (step S214).
The processing of adding the plurality of image frames together by the contest addition process according to the present embodiment is thereby completed.
The contest addition process according to the present embodiment described above has two key points. One is that, in the first stage of the contest addition, every pair of images to be added, other than the reference image, is translated and rotated, whereas in the second and subsequent stages no translation or rotation is performed on the pairs of images to be added. The first-stage addition corresponds to the processing A in Figure 62, and the addition in the second and subsequent stages corresponds to the processing B in Figure 63.
The other key point of the contest addition process according to the present embodiment is that a mechanism is provided for identifying, in the second and subsequent stages, the pixel positions at which the two images set in the first stage contributed no pixel to the addition.
Incidentally, in the above example, the contest addition is performed on eight image frames. In the contest addition process according to the present embodiment, it is essential that the continuously obtained images be stored in the image memory 4 in advance, but the number of images obtained in advance is not essential. In view of the nature of the contest addition process, however, the number of images to be added together is desirably a power of two.
The contest addition process according to the present embodiment has two advantages. One advantage, as mentioned above, is that the reference image can be selected arbitrarily after all the images to be added together have been obtained. When motion vectors are determined at the time of continuous shooting, selecting as the reference image the frame located at the center of the hand-shake trajectory during the continuous shooting makes the effective area of the addition-result image as wide as possible.
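The embodiment does not spell out how the frame at the center of the hand-shake trajectory is identified; one plausible reading, shown below purely as an assumption, is to pick the frame whose accumulated shake position is closest to the centroid of the trajectory:

```python
import numpy as np

def pick_reference_index(shake_positions):
    """Choose as reference the frame whose cumulative shake position is
    closest to the centroid of the trajectory (one possible interpretation
    of "the frame located at the centre of the hand-shake trajectory").

    shake_positions : (N, 2) array of per-frame cumulative (x, y) displacements
    """
    pos = np.asarray(shake_positions, dtype=np.float64)
    centroid = pos.mean(axis=0)
    return int(np.argmin(np.linalg.norm(pos - centroid, axis=1)))
```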
The other advantage is that every frame image is treated completely equally. For example, in the case of the average addition process described earlier, the addition coefficient is changed according to the frame number so that the weights of the individual frames in the addition result are equal to each other, but rounding errors inevitably occur, and as a result the weights of the individual frames are not exactly identical. In the contest addition process according to the present embodiment, on the other hand, frames are added together after being multiplied by identical coefficients, so that the influence of rounding errors does not become unbalanced.
However, since the contest addition process stores all the images in memory in advance, it requires a large amount of memory. There is therefore an upper limit on the number of frames that can be obtained continuously, so that continuous addition cannot be carried on indefinitely as in the simple addition process and the average addition process described earlier.
However, when a configuration is adopted in which the continuously obtained images are temporarily stored in an external storage whose cost per bit is very low (for example, a hard disk), the above problem can be avoided.
Recently, a method of performing high-speed capture with a short exposure time of about 1/60 second, in which the influence of hand shake and the blurring of moving objects do not readily appear, has attracted attention in the market as a method for preventing not only the influence of hand shake but also the blurring of moving objects.
The problem in this case is what ISO speed can be set while the noise level is kept low. Since raising the speed usually makes picture noise more noticeable, manufacturers of digital cameras reduce noise by various methods and advertise, as the maximum ISO speed representing performance, the value up to which a certain S/N level can be maintained.
One purpose of the hand-shake correction for still images according to the present embodiment is noise reduction. When a plurality of images are added together, noise reduction can be achieved at very high speed even for moving objects, by detecting the part of a moving object and either not performing the addition for that part or performing a separate tracking search and adding only that part.
In the case of random noise, when N images are added together, the noise component is statistically reduced in proportion to the square root of N. That is, with appropriate handling of moving objects, when a digital camera whose real performance corresponds to ISO 3200 adds 16 images together, the ISO speed that can be set can be extended to four times that value, that is, to ISO 12800.
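As a quick check of these figures (simple arithmetic, not an additional part of the embodiment): for independent random noise of standard deviation σ per frame, averaging N frames leaves a residual noise of
σ_N = σ/√N,
so for N = 16 the noise amplitude falls by √16 = 4, which is why a real sensitivity of ISO 3200 can be presented as about 4 × 3200 = ISO 12800 at a comparable S/N level.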
Even when a fixed number of images are added together, the desired addition system in this case may take a certain processing time, but it should provide the highest possible image quality. The contest addition process according to the present embodiment satisfies this requirement. In other words, applications suitable for the contest addition process include the improvement of ISO speed in high-speed capture.
As described above, the image pickup device according to the present embodiment has three addition processes, namely the simple addition process, the average addition process, and the contest addition process. As described above, each of the three addition processes has digital-camera applications for which it is best suited.
The image pickup device according to the present embodiment allows the user to select which of the three addition processes to use by operating the user input unit 3. The user can therefore select an addition process according to the addition result the user desires.
Incidentally, instead of having the user directly select one of the three addition processes, the image pickup device may be configured to offer, as selectable functions, the applications for which the three addition processes are best suited, so that when the user selects an application, the CPU 1 automatically selects the method best suited to that application.
The digital camera can then realize three new applications: hand-held long-exposure shooting, a shooting style in which moving objects gradually fade away, and shooting at high speeds exceeding the camera's actual performance value.
[Second Embodiment of the Image Processing Apparatus]
The hand-shake vector detection unit 15 in the image pickup device according to the first embodiment of the image processing apparatus described above assumes that both of the two images, that is, the image of the original frame and the image of the reference frame, are stored in frame memories of the frame memory unit 4 shown in Fig. 1. The detection timing of the motion vector is therefore delayed by one frame.
The second embodiment, on the other hand, uses the streaming image data from the image pickup element 11 as the reference frame, and can perform the SAD-value calculation in real time on the raster-scanned stream data.
Figure 64 is a block diagram showing a structural example of the image pickup device according to the second embodiment. As can be seen from Figure 64, the block structure of the image pickup signal processing system 10 and the other block structures are the same as those of the first embodiment shown in Fig. 1. However, the image memory unit 4 in the second embodiment is formed of two frame memories 44 and 45. The frame memory 44 is used for motion vector detection. The frame memory 45 is used for frame-image superposition.
Incidentally, as is well known, when frame memories that cannot be written to and read from simultaneously are used, two frame memories are actually provided in place of the frame memory 44 and are switched between writing and reading alternately every frame.
As will be described later, using the input pixel data from the data conversion unit 14 as the pixel data of the reference frame and the data stored in the frame memory 44 as the data of the original frame, the hand-shake vector detection unit 15 performs the processing of generating the reduced SAD tables, the processing of detecting the per-block motion vectors, the processing of generating the aggregate SAD table, and the processing of generating the global motion vector (hand-shake vector). In the second embodiment, as before, in addition to the global motion vector (the translational component of the hand shake), that is, the translation amounts (α, β), the hand-shake vector detection unit 15 also detects the rotation angle γ of the reference frame with respect to the original frame.
Incidentally, in this example, the hand-shake vector detection unit 15 always obtains the hand-shake vector relative to the immediately preceding image. Therefore, in order to calculate the hand shake relative to the first reference image (see the image frame 120 in Fig. 3), the hand-shake components from the first image up to the present image are integrated.
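A minimal sketch of this integration, under the simplifying assumption that the frame-to-frame translation and rotation components can simply be accumulated independently (the interaction between rotation and translation is ignored here):

```python
def integrate_shake(frame_to_frame):
    """Integrate frame-to-frame shake (dx, dy, dtheta) into the shake of each
    frame relative to the first image, by cumulative summation."""
    total, relative_to_first = (0.0, 0.0, 0.0), []
    for dx, dy, dtheta in frame_to_frame:
        total = (total[0] + dx, total[1] + dy, total[2] + dtheta)
        relative_to_first.append(total)
    return relative_to_first
```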
Then, after a delay of one frame, the rotation and translation addition unit 19, in the same manner as in the first embodiment described above, cuts out and simultaneously rotates the image frame stored in the frame memory 44 according to the detected translational component of the hand shake and the detected rotation angle, and adds the frame to, or averages it with, the image stored in the frame memory 45. By repeating this processing, an image frame 120 (see Fig. 3) of a still image having a higher S/N ratio and higher resolution and free from the influence of hand-shake components is generated in the frame memory 45.
Then, according to a control instruction from the CPU 1, the resolution conversion unit 16 cuts out, from the image frame in the frame memory 45, an image having a predetermined resolution and a predetermined image size. As described above, the resolution conversion unit 16 supplies the image as captured image data to be recorded to the codec unit 17, and supplies the image as a monitoring image to the NTSC encoder 18.
In the second embodiment, the original frame is stored in the frame memory 44, and the reference frame is input from the data conversion unit 14 as a stream. The hand-shake vector detection unit 15 in the first embodiment determines the SAD values of reference blocks using the two pieces of picture data stored in the two frame memories 41 and 42. The hand-shake vector detection unit 15 in the second embodiment, on the other hand, as shown in Figure 64, determines the SAD values of reference blocks using the streaming image data from the data conversion unit 14 as the image data of the reference frame and the image data stored in the frame memory 44 as the image data of the original frame.
As described above, the second embodiment uses the streaming image data from the data conversion unit 14 as the image data of the reference frame. Therefore, a plurality of reference blocks containing the input pixel as an element are present in the reference frame at the same time. Figure 65 is a diagram of assistance in explaining this situation.
Specifically, Figure 65 shows that an input pixel Din in the search range 105 in the reference frame 102 is, for example, both a pixel located on the left side of a reference block 1061 corresponding to a reference vector 1071 and a pixel located on the upper right side of a reference block 1062 corresponding to a reference vector 1072.
Therefore, when the input pixel Din is regarded as belonging to the reference block 1061, the pixel D1 in the target block 103 needs to be read and the difference from the pixel D1 calculated. When the input pixel Din is regarded as belonging to the reference block 1062, the pixel D2 in the target block 103 needs to be read and the difference from the pixel D2 calculated.
Although only two reference blocks are shown in Figure 65, and in Figure 66 described later, for simplicity, there are in fact a large number of reference blocks containing the input pixel Din as an element.
In the SAD calculation in the second embodiment, the absolute difference between the luminance value Y of the input pixel Din and the luminance value Y of the pixel in the target block corresponding to the position of the input pixel within each reference block is calculated, and the calculated absolute difference is added to the SAD table element corresponding to the reference vector of each reference block.
For example, as shown in Figure 66, when the input pixel Din is regarded as belonging to the reference block 1061, the absolute difference between the pixel D1 in the target block 103 and the input pixel Din is added to the SAD value of the SAD table element 1091 corresponding to the reference vector 1071 of the SAD table 108, and the result is written back. Likewise, as shown in Figure 66, when the input pixel Din is regarded as belonging to the reference block 1062, the absolute difference between the pixel D2 in the target block 103 and the input pixel Din is added to the SAD value of the SAD table element 1092 corresponding to the reference vector 1072 of the SAD table 108, and the result is written back.
Thus, when the input pixels of all the areas in the search range have been input, the SAD table is completed, and the processing ends.
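A minimal sketch of this per-input-pixel accumulation for a full (non-reduced) SAD table and a single target block; the data structures and names are illustrative only:

```python
def accumulate_input_pixel(sad_table, target, target_origin, din_value, din_pos, ref_vectors):
    """Accumulate one streamed reference-frame pixel Din into the SAD table.

    sad_table     : dict keyed by reference vector (vx, vy), e.g. defaultdict(int)
    target        : 2D luminance array of the target block (original frame)
    target_origin : (x0, y0) top-left corner of the target block in frame coordinates
    din_value     : luminance Y of the input pixel Din
    din_pos       : (x, y) frame coordinates of Din
    ref_vectors   : iterable of candidate reference vectors (vx, vy) in the search range
    """
    x, y = din_pos
    x0, y0 = target_origin
    h, w = len(target), len(target[0])
    for vx, vy in ref_vectors:
        # Target-block pixel corresponding to Din under reference vector (vx, vy);
        # cf. Equation 4: alpha = |Io(x - vx, y - vy) - Ii(x, y)|.
        tx, ty = x - vx - x0, y - vy - y0
        if 0 <= tx < w and 0 <= ty < h:   # Din lies inside this reference block
            sad_table[(vx, vy)] += abs(int(din_value) - int(target[ty][tx]))
```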
Figure 66 is a diagram of assistance in explaining the case where the existing method is applied to real-time SAD computation. In the second embodiment, instead of adding the calculated absolute difference as it is to the SAD value of the table element 1091 or 1092 corresponding to the reference vector 1071 or 1072 of the SAD table 108, the reference-reduced vector obtained by reducing the reference vector 1071 or 1072 by the reduction scale factor 1/n is calculated, as in the first embodiment; distribution-and-addition values to be distributed and added to the SAD values corresponding to a plurality of reference vectors adjacent to the reference-reduced vector are determined from the calculated absolute difference; and the obtained distribution-and-addition values are added to the SAD values corresponding to the plurality of adjacent reference vectors.
In order to calculate an accurate motion vector after the SAD table (the reduced SAD table) is completed, the second embodiment can also adopt the method of using a quadratic surface, or cubic curves in the horizontal direction and the vertical direction; this method is the same as that described earlier for the first embodiment.
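As a hedged illustration of this sub-pixel refinement, the sketch below uses a simplified per-axis parabola fit through three SAD samples instead of the quadratic-surface fit over 15 neighbouring table elements used in the embodiment; it assumes the reduced SAD table is a dict keyed by integer reduced-vector coordinates and that the neighbours of the minimum exist:

```python
def subpixel_offset(s_m1, s_0, s_p1):
    """1-D parabola fit through three SAD samples around the integer minimum.
    Returns the fractional offset of the parabola's minimum (roughly in [-0.5, 0.5])."""
    denom = s_m1 - 2.0 * s_0 + s_p1
    return 0.0 if denom == 0 else (s_m1 - s_p1) / (2.0 * denom)

def refine_motion_vector(reduced_sad, mx, my, n):
    """Refine the integer minimum (mx, my) of the reduced SAD table per axis
    and scale the result back by the reduction factor n."""
    px = mx + subpixel_offset(reduced_sad[(mx - 1, my)],
                              reduced_sad[(mx, my)],
                              reduced_sad[(mx + 1, my)])
    py = my + subpixel_offset(reduced_sad[(mx, my - 1)],
                              reduced_sad[(mx, my)],
                              reduced_sad[(mx, my + 1)])
    return px * n, py * n
```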
Figures 67 and 68 are flowcharts of the reduced SAD table generation processing and the per-block motion vector detection processing performed for each target block in the hand-shake vector detection unit 15 in the case of the second embodiment, corresponding to step S32 in Figure 36, step S52 in Figure 38, step S72 in Figure 40, and step S82 in Figure 41.
First, the hand-shake vector detection unit 15 receives pixel data Din(x, y) at an arbitrary position (x, y) of the input image frame (reference frame) (step S221). Next, one reference vector (vx, vy) corresponding to one of the plurality of reference blocks including the pixel position (x, y) is set (step S222).
Next, the absolute difference α between the pixel value Ii(x, y) in the reference block Ii corresponding to the set reference vector (vx, vy) and the pixel value Io(x − vx, y − vy) in the target block Io, where the pixel value Io(x − vx, y − vy) corresponds to the pixel value Ii(x, y), is calculated (step S223). That is, the absolute difference α is calculated as
α = |Io(x − vx, y − vy) − Ii(x, y)|   ... (Equation 4)
Next, the reduction scale factor is set to 1/n, and the reference-reduced vector (vx/n, vy/n) obtained by reducing the reference vector (vx, vy) to 1/n is calculated (step S224).
Next, a plurality of reference vectors adjacent to the reference-reduced vector (vx/n, vy/n) (the above-described four adjacent reference vectors in this example) are detected (step S225). Then, as described above, on the basis of the relation between the position represented by the reference-reduced vector and the positions represented by the respective adjacent reference vectors, the values (absolute differences) to be distributed and added to the table elements corresponding to the four detected adjacent reference vectors are calculated from the absolute difference α obtained in step S223, as linearly weighted distribution values (step S226). Then, the four linearly weighted distribution values thus obtained are added to the values of the SAD table elements corresponding to the respective adjacent reference vectors (step S227).
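A minimal sketch of steps S224 to S227, assuming bilinear (linearly weighted) distribution of the absolute difference over the four integer reduced vectors surrounding the reference-reduced vector, with the reduced SAD table held in a dict-like accumulator:

```python
import math
from collections import defaultdict

def distribute_to_reduced_table(reduced_sad, vx, vy, alpha, n):
    """Distribute one absolute difference alpha (Equation 4) into the reduced
    SAD table for reference vector (vx, vy), reduction scale factor 1/n.

    reduced_sad : defaultdict(float) keyed by integer reduced-vector coords (rx, ry)
    """
    # Reference-reduced vector (step S224); in general it has a fractional part.
    fx, fy = vx / n, vy / n

    # The four adjacent integer reduced vectors surrounding it (step S225).
    x0, y0 = math.floor(fx), math.floor(fy)
    dx, dy = fx - x0, fy - y0

    # Linearly weighted distribution values (step S226), added to the four
    # neighbouring table elements (step S227); the weights sum to 1, so the
    # whole of alpha is accounted for.
    for rx, ry, w in ((x0,     y0,     (1 - dx) * (1 - dy)),
                      (x0 + 1, y0,     dx * (1 - dy)),
                      (x0,     y0 + 1, (1 - dx) * dy),
                      (x0 + 1, y0 + 1, dx * dy)):
        reduced_sad[(rx, ry)] += w * alpha
```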
Next, it is determined whether the operations of steps S222 to S227 have been performed for all the reference blocks including the pixel data Din(x, y) (step S228). When it is determined that another reference block including the pixel data Din(x, y) remains to be processed, the processing returns to step S222 to set another reference block (vx, vy) including the pixel data Din, and the processing of steps S222 to S227 is repeated.
When it is determined in step S228 that the operations of steps S222 to S227 have been performed for all the reference blocks including the pixel data Din(x, y), it is determined whether the processing of the above operation steps has been completed for all the input pixels Din in the search range (step S231 in Figure 68). When it is determined that the processing has not been completed for all the input pixels Din in the search range, the processing returns to step S221 to obtain the next input pixel Din in the search range, and the processing from step S221 is repeated.
When it is determined in step S231 that the processing of the above operation steps has been completed for all the input pixels Din in the search range, the reduced SAD table is determined to be complete. The minimum SAD value is detected in the completed reduced SAD table (step S232).
Next, a quadratic surface is generated using the minimum SAD value (minimum value) at the table element address (mx, my) and the SAD values of a plurality of neighboring table elements (the above-described 15 neighboring table elements in this example) (step S233). A minimum-value vector (px, py), which represents with decimal-fraction precision the position corresponding to the minimum SAD value on the quadratic surface, is calculated.
Then, the motion vector (px × n, py × n) to be obtained is calculated by multiplying the minimum-value vector (px, py), which represents the position with decimal-fraction precision, by n (step S235).
Incidentally, in this example as well, as in the examples described above, the method of using cubic curves in the horizontal direction and the vertical direction can be used as the method for calculating the minimum-value vector (px, py) representing the position with decimal-fraction precision.
In addition, also in the second embodiment, as in the third example described with reference to the flowcharts of Figures 48 to 51 for the first embodiment, the motion vector detection processing using the reduced SAD table can of course be repeated in two or more stages while narrowing the search range and changing the reduction scale factor as required.
Compared with the first embodiment, the second embodiment has the advantages that the frame memory can be reduced by an amount corresponding to one frame, and that the time taken to store the input image in the frame memory can be shortened. Needless to say, beyond the effect of the memory reduction, the shortening of the processing time, which has attracted attention recently, is even more important.
[Third Embodiment]
The second embodiment described above determines the hand-shake vector and the rotation angle of the input image by always comparing the input image with the immediately preceding image. In practice, as described above and as shown in Fig. 3, the first frame is set as the reference and the subsequent frames are added to the first frame, so that setting the first frame as the reference reduces the errors of motion vector detection. The third embodiment takes this point into consideration.
Figure 69 is a block diagram showing a structural example of an image pickup device according to a third embodiment of the present invention.
In the example of Figure 69, in addition to the frame memory 44 and the frame memory 45 of the second embodiment in Figure 64, the image memory unit 4 also has a frame memory 46. The image data from the data conversion unit 14 is written into the frame memory 44 and the frame memory 46.
The third embodiment is a system configuration in which the frame memory 46 serves as the memory storing the first frame serving as the target (the original frame and reference image frame), and the reference vector of the input image is always calculated with respect to the first image. In this structure, the image addition result is stored in the frame memory 45.
In addition, in this example, as indicated by the dotted line in Figure 69, the image data of the first frame serving as the reference is also written into the frame memory 45.
Then, the second and subsequent image frames are written into the frame memory 44 and are also supplied to the hand-shake vector detection unit 15. The hand-shake vector detection unit 15 detects the hand-shake vector and the rotation angle between each of the second and subsequent image frames from the data conversion unit 14 and the image data of the first frame read from the frame memory 46.
The hand-shake vector detection unit 15 supplies the CPU 1 with information on the detected hand-shake vectors and detected rotation angles of the second and subsequent image frames with respect to the first image frame.
Under the control of the CPU 1, the second and subsequent images stored in the frame memory 44 are read from the frame memory 44 in such a manner that the calculated hand-shake components between the second and subsequent images and the reference image of the first frame are canceled. The second and subsequent images are then supplied to the rotation and translation addition unit 19. According to a control signal from the CPU 1, the rotation and translation addition unit 19 rotates each of the second and subsequent image frames according to its relative rotation angle with respect to the first, reference image frame, and adds each of the second and subsequent image frames to, or averages it with, the image frame read from the frame memory 46. The image frame resulting from the addition or averaging is written into the frame memory 45.
Then, according to a control instruction from the CPU 1, the data of the image frame in the frame memory 45 is cut out so as to have a predetermined resolution and a predetermined image size, and the result is supplied to the resolution conversion unit 16. Then, as described above, the resolution conversion unit 16 supplies the image as captured image data to be recorded to the codec unit 17, and supplies the image as monitoring image data to the NTSC encoder 18.
Incidentally, the third embodiment described above has been described as a system that allows unlimited addition or unlimited average addition with the first frame of the input images as the reference image. However, when there is sufficient memory capacity or when images can be temporarily stored in the recording and reproducing device unit 5, it is also possible to store all the images to be added together in advance and then add the images together by the contest addition process or the average addition process.
A further enhanced effect can be obtained by combining the sensorless hand-shake correction according to the above first to third embodiments with the optical hand-shake correction of the prior art.
This is because, as described at the beginning, optical hand-shake correction using a gyro sensor has an advantage in coarse correction but has difficulty correcting rotation, whereas sensorless hand-shake correction using block matching (including rotation correction) has high accuracy; however, even when the method according to the present embodiment is used, the cost of the SAD table increases sharply as the search range widens, and the processing time increases when the motion vector detection processing is performed in a plurality of stages.
Therefore, by performing coarse correction with the optical hand-shake correction so as to narrow the search range of the motion vector detection of the sensorless hand-shake correction, and then calculating the motion vectors within that search range with the sensorless hand-shake correction, a low-cost, high-accuracy, and high-speed hand-shake correction system can be realized.
[Effects of the Embodiments]
The sensorless hand-shake correction method using block matching according to the first to third embodiments is superior to the sensorless still-image hand-shake correction techniques proposed so far in terms of cost, accuracy, processing time, and stability.
All still-image hand-shake correction systems currently available on the market use a combination of a gyro sensor and optical correction (for example, lens shifting), but they involve considerable error and cannot provide satisfactory image quality. The method according to the present embodiments, on the other hand, realizes low-cost and high-accuracy hand-shake correction by eliminating the sensor and the mechanical parts.
[Modification Examples]
Although in the description of the foregoing embodiments the reduction scale factors of the reference vector in the horizontal direction and the vertical direction are the same, as mentioned above, the reduction scale factors in the horizontal direction and the vertical direction may differ from each other.
In addition, although in the foregoing embodiments SAD values are obtained for all the pixels in the reference block and the target block, SAD values may be obtained using, for example, only every k-th pixel (k being a natural number).
Systems for motion vector detection by real-time processing usually perform the SAD operation in such a manner that, in order to reduce operation cost and processing time, only representative points in the target block are searched for in the reference block.
Specifically, as shown in Figure 70, the target block 103 is divided into a plurality of units, each unit being formed of, for example, a plurality of n (horizontal) × m (vertical) pixels (n and m being integers of one or more), and one pixel among the plurality of pixels of each division unit is set as a representative point TP. Then, in the SAD operation, only the plurality of representative points TP thus set in the target block 103 are used.
On the other hand, all the pixels in the reference block 106 are subjected to the SAD-value operation. For one representative point TP in the target block 103, all the pixels included in the range AR in the reference block 106 formed of the plurality of n × m pixels constituting the division unit in which the representative point TP is located are used.
Then, between the target block 103 and the reference block 106, the sum of the differences between the pixel value of each representative point TP and the pixel values of the plurality of n × m pixels included in the range AR corresponding to that representative point in the reference block 106 is obtained. The difference sums obtained for all the representative points TP in the target block 103 are then added together. This result is one element value of the SAD table.
Then, the difference calculation using the representative points TP, similar to the above, is performed for all the reference blocks in the search range of the target block 103, thereby generating the SAD table. In this example, however, the plurality of reference blocks set in the search range are separated from each other by the plurality of n × m pixels constituting the division unit, or by an integral multiple of that plurality of pixels.
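A minimal sketch of this representative-point SAD calculation for one target block and one reference block; the choice of the top-left pixel of each unit as the representative point TP is an assumption made here for illustration:

```python
import numpy as np

def representative_point_sad(target, ref_block, n, m):
    """SAD between a target block and one reference block using only the
    representative points of the target block (one point per n x m unit).

    target, ref_block : 2D luminance arrays of identical shape whose height
                        and width are multiples of m and n respectively
    """
    h, w = target.shape
    sad = 0
    for uy in range(0, h, m):            # iterate over the n x m division units
        for ux in range(0, w, n):
            tp = int(target[uy, ux])     # representative point TP of this unit
            # Range AR: all n x m pixels of the corresponding unit in the
            # reference block are compared against the single value TP.
            ar = ref_block[uy:uy + m, ux:ux + n].astype(np.int32)
            sad += int(np.abs(ar - tp).sum())
    return sad
```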
As described above, when representative points are used for the target block, only one memory access is required for one representative point TP in the target block in the calculation of the SAD value, while memory accesses are made to the plurality of pixels in the range AR in the reference block. The number of memory accesses can therefore be greatly reduced.
In addition, when only the representative points TP are used, it suffices to store, as the target block data, only the pixel data of the representative points TP among all the pixels in the target block. The frame memory capacity for storing the target block data of the original frame (target frame) can therefore be reduced.
Furthermore, by providing a small representative-point memory (SRAM) as a local memory separate from the frame memory and retaining the target block data of the original frame (target frame) in the local memory, the bandwidth of the memory 4 (DRAM) can be reduced.
Needless to say, although the above description of the processing using the representative points of the target block is based on the method described with reference to Figures 71 to 73, the description is also applicable to the method according to the second embodiment described with reference to Figures 65 to 68.
When only the representative points TP of the target block are used in the method according to the second embodiment, for each input pixel of the reference frame, all the reference blocks having a range AR that includes the input pixel are detected in the whole search range (the position of the pixel within the range AR differs from block to block), and the representative point in the target block corresponding to the range AR of each of the detected reference blocks is determined.
Then, the pixel values of the plurality of representative points obtained as a result of the determination are read from the memory storing the image data of the original frame (target frame), the differences between the pixel values of the representative points and the input pixel are calculated, and the calculation results are accumulated at the coordinate positions of the corresponding reference blocks (reference vectors) in the SAD table.
In this case, memory accesses are made only to read the representative points in the target block, so that the number of memory accesses can be greatly reduced.
Incidentally, it goes without saying that the processing using representative points can also be applied to the case of using the above-described reduced SAD table.
Incidentally, in the foregoing embodiments, the difference values and SAD values of pixels are calculated using only the luminance values Y of the pixels. However, not only the luminance values Y but also the color difference components Cb/Cr can be used for motion vector detection. In addition, the motion vector detection processing can be performed on the RAW data before the RAW data is converted into the luminance value Y and the color difference components Cb/Cr by the data conversion unit 14.
In addition, as described above, the hand-shake vector detection unit 15 is not limited to a structure based on hardware processing, and may also be realized by software.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors, insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (20)

1. An image processing apparatus comprising:
a per-block motion vector calculating means for calculating a motion vector between two pictures of an image input sequentially in units of pictures, by performing block matching in each of divided regions obtained by dividing one picture into a plurality of regions, and calculating a per-block motion vector for each of the divided regions;
a translation amount calculating means for calculating a translation amount of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated by the per-block motion vector calculating means;
a rotation angle calculating means for calculating a rotation angle of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated by the per-block motion vector calculating means; and
a rotation and translation addition means for superimposing a plurality of pictures on each other using the translation amount calculated by the translation amount calculating means and the rotation angle calculated by the rotation angle calculating means.
2. The image processing apparatus according to claim 1,
wherein the one picture is divided in a matrix form.
3. The image processing apparatus according to claim 1,
wherein the translation amount calculating means calculates the translation amount by simply averaging the components, in the direction in which the translation amount is to be obtained, of the plurality of per-block motion vectors.
4. The image processing apparatus according to claim 1,
wherein, using the per-block motion vectors calculated by the per-block motion vector calculating means and the translation amounts of the respective divided regions obtained from the per-block motion vectors, the rotation angle calculating means calculates, as the rotation angle, the rotation angle that minimizes the sum of the errors between the per-block motion vectors of the plurality of divided regions and theoretical per-block motion vectors calculated as a function of an unknown rotation angle.
5. The image processing apparatus according to claim 1, further comprising:
a global motion vector calculating means for calculating a global motion vector of the other of the two pictures as a whole with respect to one of the two pictures; and
an evaluating means for evaluating, using the global motion vector, each of the plurality of per-block motion vectors obtained by the per-block motion vector calculating means;
wherein, when the number of per-block motion vectors given a high evaluation value by the evaluating means is less than a predetermined threshold, the other of the two pictures is excluded from the pictures superimposed on each other by the rotation and translation addition means.
6. The image processing apparatus according to claim 1, further comprising:
a global motion vector calculating means for calculating a global motion vector of the other of the two pictures as a whole with respect to one of the two pictures; and
an evaluating means for evaluating, using the global motion vector, each of the plurality of per-block motion vectors obtained by the per-block motion vector calculating means;
wherein the translation amount calculating means and the rotation angle calculating means calculate the translation amount and the rotation angle only from the plurality of per-block motion vectors given a high evaluation value by the evaluating means.
7. The image processing apparatus according to claim 1, further comprising: a global motion vector calculating means for calculating a global motion vector of the other of the two pictures as a whole from the result of the matching by the per-block motion vector calculating means,
wherein the per-block motion vector calculating means performs the block matching processing on the other of the two pictures a plurality of times, each search range being offset according to the global motion vector that the global motion vector calculating means obtains from the result of the preceding block matching, and each search range being narrower than the preceding search range, and
the translation amount calculating means and the rotation angle calculating means calculate the translation amount and the rotation angle from the plurality of per-block motion vectors obtained by the last block matching of the per-block motion vector calculating means.
8. The image processing apparatus according to claim 7, further comprising: an evaluating means for evaluating each of the per-block motion vectors obtained by each block matching, using the global motion vector of the other of the two pictures as a whole obtained from the result of that block matching;
wherein, in calculating the next global motion vector, the global motion vector calculating means excludes from the objects of calculation the target blocks whose per-block motion vectors have been judged to be of low reliability by the evaluation of the evaluating means; and
the translation amount calculating means and the rotation angle calculating means calculate the translation amount and the rotation angle only from the per-block motion vectors of the target blocks other than the target blocks excluded from the objects of calculation, among the plurality of per-block motion vectors obtained by the last block matching of the per-block motion vector calculating means.
9. The image processing apparatus according to claim 7, further comprising: an evaluating means for evaluating each of the plurality of per-block motion vectors obtained by each block matching, using the global motion vector of the other of the two pictures as a whole obtained from the result of that block matching;
wherein, when the number of per-block motion vectors given a high evaluation value by the evaluating means is less than a predetermined threshold, the other of the two pictures is excluded from the pictures superimposed on each other by the rotation and translation addition means.
10. The image processing apparatus according to claim 7, further comprising: an evaluating means for evaluating each of the plurality of per-block motion vectors obtained by each block matching, using the global motion vector of the other of the two pictures as a whole obtained from the result of that block matching;
wherein, when the number of per-block motion vectors given a high evaluation value by the evaluating means is equal to or greater than a predetermined number, the global motion vector of that block matching is recalculated from the per-block motion vectors given the high evaluation value, and the offset of the next search range is determined on the basis of the recalculated global motion vector.
11. The image processing apparatus according to claim 1, further comprising:
an error calculating means for obtaining the error of the translation amount and the rotation angle represented by each per-block motion vector with respect to the translation amount calculated by the translation amount calculating means and the rotation angle calculated by the rotation angle calculating means;
a determining means for determining whether the sum of the errors of the plurality of per-block motion vectors obtained by the error calculating means is less than a predetermined threshold; and
a control means for performing control such that the processing in the rotation and translation addition means is performed when the determining means determines that the sum of the errors of the plurality of per-block motion vectors is less than the predetermined threshold.
12. The image processing apparatus according to claim 11,
wherein, when the determining means determines that the sum of the errors of the plurality of per-block motion vectors is equal to or greater than the predetermined threshold, the per-block motion vector corresponding to the largest error obtained by the error calculating means is removed, and the translation amount calculated by the translation amount calculating means and the rotation angle calculated by the rotation angle calculating means are recalculated.
13. The image processing apparatus according to claim 1,
wherein the rotation and translation addition means subjects the plurality of pictures to simple addition.
14. The image processing apparatus according to claim 1,
wherein the rotation and translation addition means subjects the plurality of pictures to averaging addition.
15. The image processing apparatus according to claim 1,
wherein the rotation and translation addition means subjects the plurality of pictures to contest addition.
16. An image pickup device comprising:
an image pickup unit;
a per-block motion vector calculating means for calculating a motion vector between two pictures of a captured image from the image pickup unit, by performing block matching in each of divided regions obtained by dividing one picture into a plurality of regions, and calculating a per-block motion vector for each of the divided regions;
a translation amount calculating means for calculating a translation amount of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated by the per-block motion vector calculating means;
a rotation angle calculating means for calculating a rotation angle of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated by the per-block motion vector calculating means;
a rotation and translation addition means for superimposing a plurality of pictures of the captured image from the image pickup unit on each other using the translation amount calculated by the translation amount calculating means and the rotation angle calculated by the rotation angle calculating means; and
a recording means for recording, on a recording medium, the data of the captured image obtained by superimposing the plurality of pictures on each other.
17. The image pickup device according to claim 16,
wherein the rotation and translation addition means includes:
a simple addition means for subjecting the plurality of pictures to simple addition;
an averaging addition means for subjecting the plurality of pictures to averaging addition;
a contest addition means for subjecting the plurality of pictures to contest addition; and
a selecting means for selecting one of the simple addition means, the averaging addition means, and the contest addition means; and the image pickup device further comprises:
a user input receiving means for receiving a user selection operation input specifying which of the simple addition means, the averaging addition means, and the contest addition means is to be used by the rotation and translation addition means; and
a control means for controlling the selecting means according to the user selection operation input received by the user input receiving means.
18. An image processing method comprising the steps of:
calculating a motion vector between two pictures of an image input sequentially in units of pictures, by performing block matching in each of divided regions obtained by dividing one picture into a plurality of regions, and calculating a per-block motion vector for each of the divided regions;
calculating a translation amount of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated in the per-block motion vector calculating step;
calculating a rotation angle of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated in the per-block motion vector calculating step; and
superimposing a plurality of pictures on each other using the translation amount calculated in the translation amount calculating step and the rotation angle calculated in the rotation angle calculating step.
19. An image pickup method comprising the steps of:
calculating a motion vector between two pictures of a captured image from an image pickup unit, by performing block matching in each of divided regions obtained by dividing one picture into a plurality of regions, and calculating a per-block motion vector for each of the divided regions;
calculating a translation amount of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated in the per-block motion vector calculating step;
calculating a rotation angle of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated in the per-block motion vector calculating step;
superimposing a plurality of pictures of the captured image from the image pickup unit on each other using the translation amount calculated in the translation amount calculating step and the rotation angle calculated in the rotation angle calculating step; and
recording, on a recording medium, the data of the captured image obtained by superimposing the plurality of pictures on each other in the rotation and translation addition step.
20. An image processing apparatus comprising:
a per-block motion vector calculating section configured to calculate a motion vector between two pictures of an image input sequentially in units of pictures, by performing block matching in each of divided regions obtained by dividing one picture into a plurality of regions, and to calculate a per-block motion vector for each of the divided regions;
a translation amount calculating section configured to calculate a translation amount of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated by the per-block motion vector calculating section;
a rotation angle calculating section configured to calculate a rotation angle of the other of the two pictures with respect to one of the two pictures, on the basis of a plurality of the per-block motion vectors calculated by the per-block motion vector calculating section; and
a rotation and translation addition section configured to superimpose a plurality of pictures on each other using the translation amount calculated by the translation amount calculating section and the rotation angle calculated by the rotation angle calculating section.
CN2007101086584A 2006-06-14 2007-06-14 image processing device and method, image pickup device and method Expired - Fee Related CN101090456B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2006164209 2006-06-14
JP2006-164209 2006-06-14
JP2006164209A JP4178480B2 (en) 2006-06-14 2006-06-14 Image processing apparatus, image processing method, imaging apparatus, and imaging method

Publications (2)

Publication Number Publication Date
CN101090456A true CN101090456A (en) 2007-12-19
CN101090456B CN101090456B (en) 2011-12-07

Family

ID=38353705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101086584A Expired - Fee Related CN101090456B (en) 2006-06-14 2007-06-14 image processing device and method, image pickup device and method

Country Status (5)

Country Link
US (1) US7817185B2 (en)
EP (1) EP1868389A2 (en)
JP (1) JP4178480B2 (en)
KR (1) KR20070119525A (en)
CN (1) CN101090456B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101600113B (en) * 2008-06-02 2012-02-08 索尼株式会社 Image processor and image processing method
CN102498718A (en) * 2009-07-03 2012-06-13 法国电信公司 Prediction of a movement vector of a current image partition having a different geometric shape or size from that of at least one adjacent reference image partition and encoding and decoding using one such prediction
CN102725774A (en) * 2010-01-08 2012-10-10 日本电气株式会社 Similarity degree calculation device, similarity degree calculation method, and program
CN103150542A (en) * 2011-12-06 2013-06-12 联咏科技股份有限公司 Method of searching for moving tiny object in dynamic image
CN103854266A (en) * 2012-12-05 2014-06-11 三星泰科威株式会社 Method and apparatus for processing image
CN103999448A (en) * 2011-11-28 2014-08-20 Ati科技无限责任公司 Method and apparatus for correcting rotation of video frames
CN104122770A (en) * 2013-04-27 2014-10-29 京东方科技集团股份有限公司 Deviation supplement and correction method and deviation supplement and correction device
CN104182940A (en) * 2014-08-20 2014-12-03 苏州阔地网络科技有限公司 Blurred image restoration method and system
CN105979135A (en) * 2015-03-12 2016-09-28 佳能株式会社 Image processing apparatus, image processing method
CN106231310A (en) * 2010-04-05 2016-12-14 三星电子株式会社 For the method and apparatus performing interpolation based on conversion and inverse transformation
CN106464805A (en) * 2014-05-19 2017-02-22 株式会社岛津制作所 Image-processing device
CN106651918A (en) * 2017-02-16 2017-05-10 国网上海市电力公司 Method for extracting foreground under shaking background
CN107209940A (en) * 2015-02-06 2017-09-26 高通股份有限公司 Use environment flash lamp ambient image detects the moving region in scene
CN107409167A (en) * 2015-01-15 2017-11-28 株式会社岛津制作所 Image processing apparatus
CN108668074A (en) * 2017-03-28 2018-10-16 佳能株式会社 Image blur compensation device and its control method, picture pick-up device and storage medium
CN109672818A (en) * 2017-10-16 2019-04-23 华为技术有限公司 A kind of method and device adjusting picture quality
CN109691085A (en) * 2016-09-14 2019-04-26 富士胶片株式会社 Photographic device and camera shooting control method
CN109891877A (en) * 2016-10-31 2019-06-14 Eizo株式会社 Image processing apparatus, image display device and program
CN110458820A (en) * 2019-08-06 2019-11-15 腾讯科技(深圳)有限公司 A kind of multimedia messages method for implantation, device, equipment and storage medium
CN110622210A (en) * 2017-05-18 2019-12-27 三星电子株式会社 Method and apparatus for processing 360 degree images
CN112084286A (en) * 2020-09-14 2020-12-15 智慧足迹数据科技有限公司 Spatial data processing method and device, computer equipment and storage medium
CN112383677A (en) * 2020-11-04 2021-02-19 三星电子(中国)研发中心 Video processing method and device
CN112422773A (en) * 2020-10-19 2021-02-26 慧视江山科技(北京)有限公司 Electronic image stabilization method and system based on block matching

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8130330B2 (en) * 2005-12-05 2012-03-06 Seiko Epson Corporation Immersive surround visual fields
JP2007325027A (en) * 2006-06-01 2007-12-13 Oki Electric Ind Co Ltd Image segmentation method, image segmentation apparatus, and image segmentation program
JP2008219141A (en) * 2007-02-28 2008-09-18 Sanyo Electric Co Ltd Motion vector detector, image encoder and imaging apparatus employing the same
FR2919143B1 (en) * 2007-07-16 2009-10-23 Commissariat Energie Atomique METHOD AND DEVICE FOR SIGMA-DELTA ALGORITHM MOTION DETECTION WITH ADAPTIVE THRESHOLD
US8660175B2 (en) * 2007-12-10 2014-02-25 Qualcomm Incorporated Selective display of interpolated or extrapolated video units
JP4506875B2 (en) 2008-05-19 2010-07-21 ソニー株式会社 Image processing apparatus and image processing method
CN102318203B (en) * 2008-11-12 2014-10-08 汤姆逊许可证公司 Method and equipment used for encoding video frame containing light change
US20110216828A1 (en) * 2008-11-12 2011-09-08 Hua Yang I-frame de-flickering for gop-parallel multi-thread video encoding
CN102217308B (en) * 2008-11-13 2014-10-22 汤姆森特许公司 Multiple thread video encoding using gop merging and bit allocation
US8107750B2 (en) * 2008-12-31 2012-01-31 Stmicroelectronics S.R.L. Method of generating motion vectors of images of a video sequence
CA2755737A1 (en) * 2009-03-18 2010-09-23 Saab Ab Calculating time to go and size of an object based on scale correlation between images from an electro optical sensor
US9208690B2 (en) * 2009-03-18 2015-12-08 Saab Ab Calculating time to go and size of an object based on scale correlation between images from an electro optical sensor
US9113169B2 (en) * 2009-05-07 2015-08-18 Qualcomm Incorporated Video encoding with temporally constrained spatial dependency for localized decoding
JP5349210B2 (en) * 2009-08-31 2013-11-20 株式会社ニデック Fundus image processing device
JP2011254125A (en) * 2010-05-31 2011-12-15 Sony Corp Image processing device, camera system, image processing method, and program
KR101030744B1 (en) * 2010-08-20 2011-04-26 엘아이지넥스원 주식회사 Moving picture compression apparatus and method, and moving picture compression/decompression system and method
JP2012075088A (en) 2010-09-03 2012-04-12 Pentax Ricoh Imaging Co Ltd Image processing system and image processing method
EP2732615A1 (en) * 2011-07-13 2014-05-21 Entropic Communications, Inc. Method and apparatus for motion estimation in video image data
JP5988213B2 (en) * 2011-11-28 2016-09-07 パナソニックIpマネジメント株式会社 Arithmetic processing unit
KR101449435B1 (en) 2012-01-09 2014-10-17 삼성전자주식회사 Method and apparatus for encoding image, and method and apparatus for decoding image based on regularization of motion vector
TWI466538B (en) * 2012-01-20 2014-12-21 Altek Corp Method for image processing and apparatus using the same
ITVI20120087A1 (en) 2012-04-17 2013-10-18 St Microelectronics Srl DIGITAL VIDEO STABILIZATION
JP6074198B2 (en) * 2012-09-12 2017-02-01 キヤノン株式会社 Image processing apparatus and image processing method
US9111444B2 (en) * 2012-10-31 2015-08-18 Raytheon Company Video and lidar target detection and tracking system and method for segmenting moving targets
US9596481B2 (en) * 2013-01-30 2017-03-14 Ati Technologies Ulc Apparatus and method for video data processing
JP6135220B2 (en) * 2013-03-18 2017-05-31 富士通株式会社 Movie processing apparatus, movie processing method, and movie processing program
JP6370140B2 (en) * 2014-07-16 2018-08-08 キヤノン株式会社 Zoom control device, imaging device, control method for zoom control device, control program for zoom control device, and storage medium
JP6390275B2 (en) * 2014-09-01 2018-09-19 株式会社ソシオネクスト Encoding circuit and encoding method
JP6394876B2 (en) * 2014-09-17 2018-09-26 株式会社ソシオネクスト Encoding circuit and encoding method
TWI562635B (en) * 2015-12-11 2016-12-11 Wistron Corp Method and Related Camera Device for Generating Pictures with Object Moving Trace
TWI721816B (en) * 2017-04-21 2021-03-11 美商時美媒體公司 Systems and methods for game-generated motion vectors
WO2018212514A1 (en) * 2017-05-18 2018-11-22 삼성전자 주식회사 Method and apparatus for processing 360-degree image
CN108648173A (en) * 2018-03-30 2018-10-12 湖北工程学院 It is merged facial mask method for correcting position and device
EP4325849A3 (en) 2018-11-22 2024-04-17 Beijing Bytedance Network Technology Co., Ltd. Coordination method for sub-block based inter prediction
CN109788200B (en) * 2019-01-31 2021-04-06 长安大学 Camera stability control method based on predictive analysis
RU2701058C1 (en) * 2019-04-12 2019-09-24 Общество с ограниченной ответственностью "Научно-производственная фирма "САД-КОМ" Method of motion compensation and device for its implementation
CN114208184A (en) 2019-08-13 2022-03-18 北京字节跳动网络技术有限公司 Motion accuracy in sub-block based inter prediction
WO2021027862A1 (en) * 2019-08-13 2021-02-18 Beijing Bytedance Network Technology Co., Ltd. Motion precision in sub-block based inter prediction
TWI733188B (en) * 2019-09-11 2021-07-11 瑞昱半導體股份有限公司 Apparatus and method for motion estimation of isolated objects
WO2021052504A1 (en) 2019-09-22 2021-03-25 Beijing Bytedance Network Technology Co., Ltd. Scaling method for sub-block based inter prediction
US11494881B2 (en) * 2020-12-29 2022-11-08 Hb Innovations, Inc. Global movement image stabilization systems and methods

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07283999A (en) 1994-04-07 1995-10-27 Sony Corp Image synthesizer and image photographing device
JP3006560B2 (en) 1997-09-10 2000-02-07 日本電気株式会社 Alignment device and computer-readable recording medium storing alignment program
JP3679988B2 (en) * 2000-09-28 2005-08-03 株式会社東芝 Image processing apparatus and image processing method
JP4639555B2 (en) 2001-08-31 2011-02-23 ソニー株式会社 Motion vector detection apparatus and method, camera shake correction apparatus and method, and imaging apparatus
WO2004062270A1 (en) * 2002-12-26 2004-07-22 Mitsubishi Denki Kabushiki Kaisha Image processor
JP4613510B2 (en) 2003-06-23 2011-01-19 ソニー株式会社 Image processing method and apparatus, and program
JP4340968B2 (en) * 2004-05-07 2009-10-07 ソニー株式会社 Image processing apparatus and method, recording medium, and program
JP4408779B2 (en) * 2004-09-15 2010-02-03 キヤノン株式会社 Image processing device
JP4507855B2 (en) * 2004-11-25 2010-07-21 ソニー株式会社 Image capturing apparatus control method, control apparatus, and control program
JP4695972B2 (en) * 2005-12-14 2011-06-08 キヤノン株式会社 Image processing apparatus, imaging apparatus, and image processing method
JP4620607B2 (en) * 2006-02-24 2011-01-26 株式会社モルフォ Image processing device

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101600113B (en) * 2008-06-02 2012-02-08 索尼株式会社 Image processor and image processing method
CN102498718B (en) * 2009-07-03 2016-01-20 法国电信公司 There is the prediction of the motion-vector of the present image subregion of the geometry different from the geometry of at least one adjacent reference picture subregion or size or size and use the Code And Decode of a this prediction
CN102498718A (en) * 2009-07-03 2012-06-13 法国电信公司 Prediction of a movement vector of a current image partition having a different geometric shape or size from that of at least one adjacent reference image partition and encoding and decoding using one such prediction
US8855373B2 (en) 2010-01-08 2014-10-07 Nec Corporation Similarity calculation device, similarity calculation method, and program
CN102725774A (en) * 2010-01-08 2012-10-10 日本电气株式会社 Similarity degree calculation device, similarity degree calculation method, and program
CN102725774B (en) * 2010-01-08 2015-06-17 日本电气株式会社 Similarity degree calculation device, similarity degree calculation method, and program
CN106231310B (en) * 2010-04-05 2019-08-13 三星电子株式会社 Method and apparatus for executing interpolation based on transformation and inverse transformation
CN106231310A (en) * 2010-04-05 2016-12-14 三星电子株式会社 For the method and apparatus performing interpolation based on conversion and inverse transformation
CN103999448A (en) * 2011-11-28 2014-08-20 Ati科技无限责任公司 Method and apparatus for correcting rotation of video frames
CN103150542A (en) * 2011-12-06 2013-06-12 联咏科技股份有限公司 Method of searching for moving tiny object in dynamic image
CN103854266A (en) * 2012-12-05 2014-06-11 三星泰科威株式会社 Method and apparatus for processing image
CN103854266B (en) * 2012-12-05 2017-11-03 韩华泰科株式会社 Method and apparatus for being handled image
CN104122770A (en) * 2013-04-27 2014-10-29 京东方科技集团股份有限公司 Deviation supplement and correction method and deviation supplement and correction device
CN106464805B (en) * 2014-05-19 2019-12-27 株式会社岛津制作所 Image processing apparatus
CN106464805A (en) * 2014-05-19 2017-02-22 株式会社岛津制作所 Image-processing device
US10504210B2 (en) 2014-05-19 2019-12-10 Shimadzu Corporation Image-processing device
CN104182940A (en) * 2014-08-20 2014-12-03 苏州阔地网络科技有限公司 Blurred image restoration method and system
CN107409167A (en) * 2015-01-15 2017-11-28 株式会社岛津制作所 Image processing apparatus
CN107409167B (en) * 2015-01-15 2020-01-21 株式会社岛津制作所 Image processing apparatus
CN107209940A (en) * 2015-02-06 2017-09-26 高通股份有限公司 Use environment flash lamp ambient image detects the moving region in scene
CN105979135A (en) * 2015-03-12 2016-09-28 佳能株式会社 Image processing apparatus, image processing method
CN105979135B (en) * 2015-03-12 2019-06-25 佳能株式会社 Image processing equipment and image processing method
US11297240B2 (en) 2016-09-14 2022-04-05 Fujifilm Corporation Imaging device and imaging control method capable of preventing camera shake
CN109691085A (en) * 2016-09-14 2019-04-26 富士胶片株式会社 Photographic device and camera shooting control method
US10812723B2 (en) 2016-09-14 2020-10-20 Fujifilm Corporation Imaging device and imaging control method capable of preventing camera shake
CN109891877A (en) * 2016-10-31 2019-06-14 Eizo株式会社 Image processing apparatus, image display device and program
CN106651918A (en) * 2017-02-16 2017-05-10 国网上海市电力公司 Method for extracting foreground under shaking background
CN106651918B (en) * 2017-02-16 2020-01-31 国网上海市电力公司 Foreground extraction method under shaking background
CN108668074A (en) * 2017-03-28 2018-10-16 佳能株式会社 Image blur compensation device and its control method, picture pick-up device and storage medium
CN108668074B (en) * 2017-03-28 2020-11-27 佳能株式会社 Image blur correction device, control method thereof, image pickup apparatus, and storage medium
CN110622210A (en) * 2017-05-18 2019-12-27 三星电子株式会社 Method and apparatus for processing 360 degree images
CN109672818A (en) * 2017-10-16 2019-04-23 华为技术有限公司 A kind of method and device adjusting picture quality
CN110458820A (en) * 2019-08-06 2019-11-15 腾讯科技(深圳)有限公司 A kind of multimedia messages method for implantation, device, equipment and storage medium
CN112084286A (en) * 2020-09-14 2020-12-15 智慧足迹数据科技有限公司 Spatial data processing method and device, computer equipment and storage medium
CN112084286B (en) * 2020-09-14 2021-06-29 智慧足迹数据科技有限公司 Spatial data processing method and device, computer equipment and storage medium
CN112422773A (en) * 2020-10-19 2021-02-26 慧视江山科技(北京)有限公司 Electronic image stabilization method and system based on block matching
CN112422773B (en) * 2020-10-19 2023-07-28 慧视江山科技(北京)有限公司 Electronic image stabilization method and system based on block matching
CN112383677A (en) * 2020-11-04 2021-02-19 三星电子(中国)研发中心 Video processing method and device

Also Published As

Publication number Publication date
JP2007336121A (en) 2007-12-27
EP1868389A2 (en) 2007-12-19
KR20070119525A (en) 2007-12-20
US20080175439A1 (en) 2008-07-24
JP4178480B2 (en) 2008-11-12
US7817185B2 (en) 2010-10-19
CN101090456B (en) 2011-12-07

Similar Documents

Publication Publication Date Title
CN101090456B (en) image processing device and method, image pickup device and method
CN100556082C (en) The aberration emendation method of photographic images and device, image pickup method and filming apparatus
CN101123684B (en) Taken-image signal-distortion compensation method, taken-image signal-distortion compensation apparatus, image taking method and image-taking apparatus
JP2008005084A (en) Image processor, image processing method, imaging apparatus, and imaging method
CN101790031B (en) Image processing apparatus, image processing method, and imaging apparatus
CN101562704B (en) Image processing apparatus and image processing method
JP4304528B2 (en) Image processing apparatus and image processing method
US7692688B2 (en) Method for correcting distortion of captured image, device for correcting distortion of captured image, and imaging device
CN101534447B (en) Image processing apparatus and image processing method
CN101420613B (en) Image processing device and image processing method
US7509039B2 (en) Image sensing apparatus with camera shake correction function
CN101895679B (en) Image capturing device and image capturing method
JP2009071689A (en) Image processing apparatus, image processing method, and imaging apparatus
JP2009105533A (en) Image processing device, imaging device, image processing method, and picked-up image processing method
JP4904925B2 (en) Image processing apparatus and image processing method
JP4189252B2 (en) Image processing apparatus and camera
JP2007323458A (en) Image processor and image processing method
JP4670630B2 (en) Image processing apparatus, image processing method, imaging apparatus, and imaging method
CN110930440B (en) Image alignment method, device, storage medium and electronic equipment
CN115714927A (en) Image forming apparatus with a plurality of image forming units
JP2010226212A (en) Image processor, and tracking system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111207

Termination date: 20130614