CN100556153C - A preprocessing method for multi-view images - Google Patents

A preprocessing method for multi-view images

Info

Publication number
CN100556153C
CN100556153C (application CNB2007101644985A / CN200710164498A)
Authority
CN
China
Prior art keywords
key point
source images
target image
image
prime
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2007101644985A
Other languages
Chinese (zh)
Other versions
CN101179745A (en)
Inventor
邵枫
郁梅
蒋刚毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Guizhi Intellectual Property Service Co.,Ltd.
Original Assignee
Ningbo University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CNB2007101644985A priority Critical patent/CN100556153C/en
Publication of CN101179745A publication Critical patent/CN101179745A/en
Application granted granted Critical
Publication of CN100556153C publication Critical patent/CN100556153C/en


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a preprocessing method for multi-view images. A scale-invariant feature transform (SIFT) algorithm performs extremum detection in scale space on a target image and a source image and extracts the SIFT feature vectors of their key points; a feature-matching algorithm then obtains the set of accurately matched key-point pairs between the source and target images. The source image is colour-corrected by computing the multiplicative and additive errors over the matched key-point pairs, and the corrected image is geometrically calibrated through an affine transformation established between the matched key points. The advantage of the invention is that, while preserving the accuracy of multi-view colour correction and geometric calibration, it greatly improves the robustness of colour correction and the precision of the mapping, reduces the computational complexity of geometric calibration, and improves the coding performance of multi-view images.

Description

A preprocessing method for multi-view images
Technical field
The present invention relates to a method for processing multi-view images, and in particular to a preprocessing method for multi-view images.
Background technology
In the real world, the visual content an observer sees depends on the observer's position relative to the observed object, and the observer can freely choose different angles from which to observe and analyse things. In a traditional video system, the view of the real scene is chosen by the cameraman or director for a single viewpoint; the user can only passively watch the video sequence produced by the camera at that viewpoint and cannot freely select another viewpoint from which to observe the real scene. Such single-direction video sequences reflect only one side of the real-world scene. A free-viewpoint video system lets the user freely choose a viewpoint, within a certain range, from which to watch any side of the real-world scene, and is regarded by the Moving Picture Experts Group (MPEG) of the International Organization for Standardization as the direction of development of the next generation of video systems.
Multi-view video imaging is a core link in free-viewpoint video technology: it provides video information of the captured scene from different angles. Fig. 1 is a schematic diagram of a parallel multi-view camera imaging system, in which n+1 cameras are placed side by side to capture multi-view video. Using the information from the several viewpoints of a multi-view video signal, the image of any viewpoint selected by the user can be synthesised, so that viewpoint images can be switched freely. However, because the baselines of the cameras do not lie on the same horizontal axis during acquisition, and factors such as scene illumination, camera CCD noise, shutter speed and exposure are inconsistent, the colour values of images captured by different cameras can differ greatly and their geometric positions can be offset in the vertical direction, which brings great difficulty to subsequent video coding, virtual-view rendering and multi-view stereoscopic display.
To address these problems, a typical multi-view image processing method has been proposed, shown in Fig. 2: the multi-view images collected by several cameras first undergo preprocessing, comprising colour correction and geometric calibration; the preprocessed images are then encoded and decoded, and finally virtual viewpoint images are rendered between the decoded viewpoints.
Region matching and histogram matching are the colour-correction means usually adopted. Region matching clusters and segments the target and source images, establishes colour mappings between the most similar regions, and corrects the source image with these mappings. Histogram matching computes the cumulative histograms of the target and source images; as long as the source image is made to have the same histogram distribution as the target image, the histogram of the target image can be mapped onto the source image. But if the multi-view images undergo rotation, scale zooming, view transformation or illumination change, or are affected by factors such as occlusion and noise, region matching and histogram matching cannot maintain good matches and their mapping precision is low.
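As a concrete illustration of the histogram-matching baseline described above, the following minimal numpy sketch (not part of the patent; the function name and 8-bit assumption are illustrative) maps source intensities so that their cumulative histogram follows the target's:

```python
import numpy as np

def match_histogram(source, target, levels=256):
    """Map source intensities so their cumulative histogram follows the target's."""
    s_hist = np.bincount(source.ravel(), minlength=levels)
    t_hist = np.bincount(target.ravel(), minlength=levels)
    s_cdf = np.cumsum(s_hist) / source.size   # cumulative histograms
    t_cdf = np.cumsum(t_hist) / target.size
    # for each source level, find the target level with the nearest CDF value
    lut = np.searchsorted(t_cdf, s_cdf).clip(0, levels - 1).astype(np.uint8)
    return lut[source]
```

A global look-up table like this is exactly what fails under occlusion or viewpoint change, which motivates the feature-based correction of the invention.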
Geometric calibration usually requires obtaining the intrinsic and extrinsic parameters of the cameras to realise the coordinate-system transformation that makes the geometric positions consistent, but the computational complexity of this kind of geometric calibration is relatively high.
Summary of the invention
The technical problem to be solved by this invention is to provide a multi-view image colour-correction and geometric-calibration method that effectively ensures that the colour and geometric position of each viewpoint image are consistent, while improving the coding performance of the multi-view images.
The technical scheme adopted by the present invention to solve the above technical problem is a preprocessing method for multi-view images comprising the following steps:
(1) One viewpoint image of the multi-view images captured at the same moment by a multi-view camera system is defined as the target image, denoted T, and another viewpoint image is defined as the source image, denoted S. The set of key points extracted from the target image is denoted {^(T)P}, and the set of key points extracted from the source image is denoted {^(S)P}. The plane coordinates of the target image are defined as the x'y' coordinate system and those of the source image as the xy coordinate system;
(2) A scale-invariant feature transform (SIFT) algorithm performs extremum detection in scale space on the target and source images, and the SIFT feature vector of each key point in the target key-point set {^(T)P} and the source key-point set {^(S)P} is extracted;
(3) From the SIFT feature vectors, the feature-matching algorithm obtains, on the target image, the best candidate matching key point ^(T)P(x1', y1') of the source key point ^(S)P(x1, y1), and, on the source image, the best candidate matching key point ^(S)P(x2, y2) of the target key point ^(T)P(x1', y1'); a bidirectional check then determines whether (^(T)P(x1', y1'), ^(S)P(x1, y1)) is a matched key-point pair;
(4) The target and source image data are converted from the RGB colour space to the CIEXYZ colour space and then from CIEXYZ to the CIELAB colour space; of the three CIELAB components, the 1st, the luminance component, is denoted L, the 2nd, the first colour component, is denoted a, and the 3rd, the second colour component, is denoted b;
(5) Over the set Ω of all matched key-point pairs (^(T)P(x1', y1'), ^(S)P(x1, y1)) of the target and source images, the sum of absolute differences of each of the L, a and b components is minimised,

(a_i, e_i) = argmin_{a_i, e_i} Σ_{(x1, y1), (x1', y1') ∈ Ω} abs( ^(T)I_i(x1', y1') − (a_i · ^(S)I_i(x1, y1) + e_i) ),

to obtain the multiplicative error a_i and additive error e_i of each of the L, a and b components, where Ω is the set of matched key-point pairs, ^(S)I_i(x1, y1) is the colour value of the i-th component of the source image, ^(T)I_i(x1', y1') is the colour value of the i-th component of the target image, and i = 1, 2, 3;
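The gain/offset estimation of step (5) can be sketched as follows. This is an illustrative numpy stand-in, not the patent's procedure: it fits a_i and e_i by ordinary least squares rather than the sum-of-absolute-differences criterion, and the function names are assumptions:

```python
import numpy as np

def fit_gain_offset(src_vals, tgt_vals):
    """Fit tgt ~ a*src + e for one colour component over matched key points.
    (Least-squares stand-in; the patent minimises the sum of absolute errors.)"""
    A = np.stack([src_vals, np.ones_like(src_vals)], axis=1)
    coef, *_ = np.linalg.lstsq(A, tgt_vals, rcond=None)
    a, e = coef
    return a, e

def correct_component(src_plane, a, e):
    """Apply ^(C)I = a * ^(S)I + e to a whole component plane, as in step (6)."""
    return a * src_plane + e
```

For exact agreement with the patent's L1 objective, the fit would be replaced by a least-absolute-deviations solver; least squares is used here only to keep the sketch self-contained.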
(6) With the multiplicative error a_i and additive error e_i, colour correction is applied to the L, a and b components of every pixel of the source image, ^(C)I_i(x1, y1) = a_i · ^(S)I_i(x1, y1) + e_i, where ^(C)I_i(x1, y1) is the colour value of the i-th component of the corrected image, i = 1, 2, 3; the corrected image is then transformed back to the RGB colour space;
(7) An affine transformation from the pixels of the corrected image to the pixels of the target image is established,

X' = [x1'; y1'] = [a11 a12; a21 a22] [x1; y1] + [b1; b2] = AX + B,

and the three key-point pairs of minimum Euclidean distance are selected from the matched key-point pair set (^(T)P(x1', y1'), ^(S)P(x1, y1)) as initial values to compute the rotation matrix A and translation vector B;
(8) With the rotation matrix A and translation vector B, each pixel (x1, y1) of the corrected image is geometrically calibrated in the y direction through y3 = a21·x1 + a22·y1 + b2, giving the pixel (x1, y3) of the geometrically calibrated image.
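Steps (7) and (8) can be sketched together: three point correspondences give six linear equations in the six unknowns a11, a12, a21, a22, b1, b2, and only the y row of the transform is then applied. A minimal numpy illustration (function names are assumptions; no robust estimation is attempted):

```python
import numpy as np

def solve_affine(src_pts, dst_pts):
    """Solve x' = A x + B from three point correspondences (6 equations, 6 unknowns).
    src_pts, dst_pts: arrays of shape (3, 2), points not collinear."""
    M, rhs = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        M.append([x, y, 0, 0, 1, 0]); rhs.append(xp)
        M.append([0, 0, x, y, 0, 1]); rhs.append(yp)
    a11, a12, a21, a22, b1, b2 = np.linalg.solve(np.array(M, float), np.array(rhs, float))
    return np.array([[a11, a12], [a21, a22]]), np.array([b1, b2])

def calibrate_y(pts, A, B):
    """Vertical-only correction of step (8): y3 = a21*x1 + a22*y1 + b2, x unchanged."""
    out = pts.astype(float).copy()
    out[:, 1] = A[1, 0] * pts[:, 0] + A[1, 1] * pts[:, 1] + B[1]
    return out
```

Applying only the y row reflects the parallel-camera assumption that misalignment appears mainly as vertical offset.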
The extraction of the SIFT feature vectors covers both the target image and the source image; the extraction of the source-image feature vectors comprises the following steps:
A. Through the scale-invariant feature transform in scale space, the response of the difference-of-Gaussians operator D(x1, y1, σ) = (G(x1, y1, kσ) − G(x1, y1, σ)) * I(x1, y1) is computed for each pixel of the source image at the different scales of the scale space; a characteristic-scale curve is obtained from the responses, all local extrema are detected on that curve, and the positions of the preliminary key points of the source image and the scales at which they lie are then determined from the local extrema, where σ is the scale-space factor, G(x1, y1, σ) = (1/(2πσ²))·e^(−(x1² + y1²)/(2σ²)) is the two-dimensional Gaussian function, k is the multiplicative factor, x1 = (x1, y1, σ)^T, and the extremum position is denoted x̂1;
B. By fitting a three-dimensional quadratic function, the final key-point positions of the source image and the scales at which they lie are determined. If the principal-curvature ratio of a key point is not below the set principal-curvature threshold, the key point is judged an unstable edge response and removed; otherwise, its extremum position x̂1 is substituted into the scale-space function D(x̂1) = D + (1/2)·(∂D^T/∂x1)·x̂1 to obtain the scale-space value D(x̂1), and if |D(x̂1)| is below the set scale-space threshold, the key point is judged a low-contrast key point and removed;
C. From the scale space L(x1, y1, σ) = G(x1, y1, σ) * I(x1, y1) of the source-image key points, the direction parameters of each key point remaining after removal of the unstable edge responses and low-contrast key points are determined from the direction distribution of the pixels in the key point's neighbourhood; the source-image direction parameters comprise the gradient magnitude m(x1, y1) = sqrt( (L(x1+1, y1) − L(x1−1, y1))² + (L(x1, y1+1) − L(x1, y1−1))² ) and the orientation θ(x1, y1) = tan⁻¹( (L(x1, y1+1) − L(x1, y1−1)) / (L(x1+1, y1) − L(x1−1, y1)) );
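The gradient formulas of step C can be sketched directly. The code below is illustrative: it assumes the smoothed image L is indexed [row, column] = [y, x] and uses arctan2 as a quadrant-aware tan⁻¹; the function name is an assumption:

```python
import numpy as np

def keypoint_orientation(L, x, y):
    """Gradient magnitude m and orientation theta at (x, y) of a smoothed image L,
    using the central differences from step C."""
    dx = L[y, x + 1] - L[y, x - 1]      # L(x+1, y) - L(x-1, y)
    dy = L[y + 1, x] - L[y - 1, x]      # L(x, y+1) - L(x, y-1)
    m = np.hypot(dx, dy)                # sqrt(dx^2 + dy^2)
    theta = np.arctan2(dy, dx)          # tan^-1(dy/dx), quadrant-aware
    return m, theta
```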
The extraction of the target-image feature vectors performs the same operations in the x'y' plane coordinate system as the extraction of the source-image feature vectors.
The feature-matching algorithm that yields the matched key-point pairs (^(T)P(x1', y1'), ^(S)P(x1, y1)) of the target and source images is as follows. For a source key point ^(S)P(x1, y1), first search the target image, within the maximum horizontal and vertical disparity range, for the most similar key point ^(T)P(x1', y1') and the second most similar key point ^(T)P(x2', y2'); then compute the Euclidean distances within the N×N windows centred on the source and target key points. If the distances satisfy

Σ_N ( (^(S)P(x1, y1) − μ1) − (^(T)P(x1', y1') − μ1') )² / Σ_N ( (^(S)P(x1, y1) − μ1) − (^(T)P(x2', y2') − μ2') )² < τ²,

where μ1 is the mean of the source key point ^(S)P(x1, y1) over its N×N window, μ1' and μ2' are the means of the target key points ^(T)P(x1', y1') and ^(T)P(x2', y2') over their N×N windows, N is the window size and τ is a preset threshold, then the most similar key point ^(T)P(x1', y1') is taken as the best candidate match of the source key point ^(S)P(x1, y1) on the target image. For the target key point ^(T)P(x1', y1'), the best candidate match ^(S)P(x2, y2) on the source image is obtained by computing Euclidean distances in the same way. The disparity from source to target is denoted d_sou→tar = (x1' − x1, y1' − y1) and the disparity from target to source d_tar→sou = (x2 − x1', y2 − y1'). A bidirectional check is applied to d_sou→tar and d_tar→sou: if |d_sou→tar + d_tar→sou| < 2, the pair (^(T)P(x1', y1'), ^(S)P(x1, y1)) is determined to be a matched key-point pair.
Compared with the prior art, the advantages of the multi-view image colour-correction and geometric-calibration method provided by the present invention are:
1) Based on the scale-invariant feature transform, SIFT feature vectors invariant to factors such as rotation, scale zooming, view transformation and illumination change are extracted, so that features can be matched between two multi-view images that differ greatly; compared with existing operations that describe image features with regions or histograms, this greatly improves the robustness of colour correction;
2) From the SIFT feature vectors, the feature-matching algorithm determines all matched key-point pairs of the target and source images, greatly improving the precision of the mapping;
3) Using the multiplicative error a_i and additive error e_i to colour-correct the L, a and b components of every pixel of the source image better matches the imaging principle of the camera and improves the precision of multi-view colour correction;
4) The affine transformation defined by the present invention, for cameras in rigid motion, does not require the intrinsic and extrinsic camera parameters to be obtained in advance, which greatly reduces the computational complexity of multi-view geometric calibration;
5) After the multi-view images undergo colour-correction and geometric-calibration preprocessing, the coding performance of multi-view image coding is improved and the prediction error in motion estimation and compensation is reduced.
Description of drawings
Fig. 1 is a schematic diagram of a parallel multi-view camera imaging system;
Fig. 2 is a schematic diagram of a multi-view image processing flow;
Fig. 3 is a schematic diagram of the coding structure of multi-view images;
Fig. 4 is a flow chart of the preprocessing method for multi-view images of the present invention;
Fig. 5a is the target image of the "flamenco1" multi-view test set;
Fig. 5b is the source image of the "flamenco1" multi-view test set;
Fig. 5c is the target image of the "golf2" multi-view test set;
Fig. 5d is the source image of the "golf2" multi-view test set;
Fig. 6a is a schematic diagram of the initial feature vectors of the target image of the "flamenco1" multi-view test set;
Fig. 6b is a schematic diagram of the initial feature vectors of the source image of the "flamenco1" multi-view test set;
Fig. 6c is a schematic diagram of the initial feature vectors of the target image of the "golf2" multi-view test set;
Fig. 6d is a schematic diagram of the initial feature vectors of the source image of the "golf2" multi-view test set;
Fig. 7a is a schematic diagram of the key points of the target image of the "flamenco1" multi-view test set after feature matching;
Fig. 7b is a schematic diagram of the key points of the source image of the "flamenco1" multi-view test set after feature matching;
Fig. 7c is a schematic diagram of the key points of the target image of the "golf2" multi-view test set after feature matching;
Fig. 7d is a schematic diagram of the key points of the source image of the "golf2" multi-view test set after feature matching;
Fig. 8a is a comparison for the "flamenco1" multi-view test set; the left image is the target image and the right image is the corrected image;
Fig. 8b is a comparison for the "golf2" multi-view test set; the left image is the target image and the right image is the corrected image;
Fig. 9a is a schematic diagram of the colour-difference comparison, against the target image, of the source image of the "flamenco1" multi-view test set before and after colour correction;
Fig. 9b is a schematic diagram of the colour-difference comparison, against the target image, of the source image of the "golf2" multi-view test set before and after colour correction;
Fig. 10a is a comparison for the "flamenco1" multi-view test set; the left image is the target image and the right image is the image after geometric calibration;
Fig. 10b is a comparison for the "golf2" multi-view test set; the left image is the target image and the right image is the image after geometric calibration;
Fig. 11a is a schematic comparison of the Y-component rate-distortion curves of the source image of the "flamenco1" multi-view test set before and after the colour-correction and geometric-calibration preprocessing;
Fig. 11b is a schematic comparison of the U-component rate-distortion curves of the source image of the "flamenco1" multi-view test set before and after the colour-correction and geometric-calibration preprocessing;
Fig. 11c is a schematic comparison of the V-component rate-distortion curves of the source image of the "flamenco1" multi-view test set before and after the colour-correction and geometric-calibration preprocessing;
Fig. 12a is a schematic comparison of the Y-component rate-distortion curves of the source image of the "golf2" multi-view test set before and after the colour-correction and geometric-calibration preprocessing;
Fig. 12b is a schematic comparison of the U-component rate-distortion curves of the source image of the "golf2" multi-view test set before and after the colour-correction and geometric-calibration preprocessing;
Fig. 12c is a schematic comparison of the V-component rate-distortion curves of the source image of the "golf2" multi-view test set before and after the colour-correction and geometric-calibration preprocessing.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
First, the concept of the scale-invariant feature transform used by the present invention and the problem of finding the best candidate matching key points by feature matching are described.
The scale-invariant feature transform (SIFT) algorithm first performs extremum detection in scale space and then extracts SIFT feature vectors that are invariant to factors such as rotation, scale zooming, view transformation and illumination change; a SIFT feature vector mainly comprises the position of a key point, the scale at which the key point lies, and the direction parameters of the key point.
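The difference-of-Gaussians response D = (G(kσ) − G(σ)) * I that underlies the scale-space extremum detection can be sketched with separable convolutions. A minimal numpy illustration (the kernel radius and k = √2 are assumptions of this sketch, not values fixed by the text):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalised 1-D Gaussian kernel; radius defaults to ~3 sigma."""
    radius = radius or int(3 * sigma)
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-ax**2 / (2 * sigma**2))
    return g / g.sum()

def dog_response(img, sigma, k=2**0.5):
    """D(x, y, sigma) = (G(k*sigma) - G(sigma)) * I, via two separable blurs."""
    def blur(im, s):
        g = gaussian_kernel(s)
        im = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, im)
        return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, im)
    return blur(img, k * sigma) - blur(img, sigma)
```

Stacking responses over several σ values and locating local extrema across position and scale gives the preliminary key points of step A.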
The extraction of the SIFT feature vectors of a multi-view image pair covers both the target image and the source image; the extraction of the source-image feature vectors comprises the following steps:
A. Through the scale-invariant feature transform in scale space, the response of the difference-of-Gaussians operator D(x1, y1, σ) = (G(x1, y1, kσ) − G(x1, y1, σ)) * I(x1, y1) is computed for each pixel of the source image at the different scales of the scale space; a characteristic-scale curve is obtained from the responses, all local extrema are detected on that curve, and the positions of the preliminary key points of the source image and the scales at which they lie are then determined from the local extrema, where σ is the scale-space factor, G(x1, y1, σ) = (1/(2πσ²))·e^(−(x1² + y1²)/(2σ²)) is the two-dimensional Gaussian function, k is the multiplicative factor, x1 = (x1, y1, σ)^T, and the extremum position is denoted x̂1;
B. By fitting a three-dimensional quadratic function, the final key-point positions of the source image and the scales at which they lie are determined. If the principal-curvature ratio of a key point is not below the set principal-curvature threshold, the key point is judged an unstable edge response and removed; otherwise, its extremum position x̂1 is substituted into the scale-space function D(x̂1) = D + (1/2)·(∂D^T/∂x1)·x̂1 to obtain the scale-space value D(x̂1), and if |D(x̂1)| is below the set scale-space threshold, the key point is judged a low-contrast key point and removed;
C. From the scale space L(x1, y1, σ) = G(x1, y1, σ) * I(x1, y1) of the source-image key points, the direction parameters of each key point remaining after removal of the unstable edge responses and low-contrast key points are determined from the direction distribution of the pixels in the key point's neighbourhood; the source-image direction parameters comprise the gradient magnitude m(x1, y1) = sqrt( (L(x1+1, y1) − L(x1−1, y1))² + (L(x1, y1+1) − L(x1, y1−1))² ) and the orientation θ(x1, y1) = tan⁻¹( (L(x1, y1+1) − L(x1, y1−1)) / (L(x1+1, y1) − L(x1−1, y1)) ).
The extraction of the target-image feature vectors performs the same operations in the x'y' plane coordinate system as the extraction of the source-image feature vectors.
In the present embodiment, the principal-curvature threshold is set to 10 and the scale-space threshold to 0.03.
However, there is no necessary relation between the SIFT feature vectors extracted from the target image and those extracted from the source image. To obtain the best candidate match of the source key point ^(S)P(x1, y1) on the target image, first search the target image, within the maximum horizontal and vertical disparity range from source to target, for the most similar key point ^(T)P(x1', y1') and the second most similar key point ^(T)P(x2', y2'); then compute the Euclidean distances within the N×N windows centred on the source and target key points. If the distances to the most similar and second most similar target key points satisfy

Σ_N ( (^(S)P(x1, y1) − μ1) − (^(T)P(x1', y1') − μ1') )² / Σ_N ( (^(S)P(x1, y1) − μ1) − (^(T)P(x2', y2') − μ2') )² < τ²,

where μ1 is the mean of the source key point ^(S)P(x1, y1) over its N×N window, μ1' and μ2' are the means of the target key points ^(T)P(x1', y1') and ^(T)P(x2', y2') over their N×N windows, N is the window size and τ is a preset threshold, then the most similar key point ^(T)P(x1', y1') is taken as the best candidate matching key point of ^(S)P(x1, y1) on the target image, with disparity d_sou→tar = (x1' − x1, y1' − y1) from source to target. For the target key point ^(T)P(x1', y1'), searching the source image within the maximum horizontal and vertical disparity range from target to source and computing Euclidean distances likewise yields the best candidate matching key point ^(S)P(x2, y2), with disparity d_tar→sou = (x2 − x1', y2 − y1') from target to source. A bidirectional check is applied to d_sou→tar and d_tar→sou: if |d_sou→tar + d_tar→sou| < 2, then ^(S)P(x1, y1) and ^(T)P(x1', y1') are determined to be a matched key-point pair. In the present embodiment, the window size N is related to the quality of the matched key-point pairs of the target and source images: too large or too small a window fails to yield accurate matches, and our experiments show the best effect is obtained with 3 pixels on each side of the current key point, i.e. N = 7. The threshold τ lies in [0, 1] in theory, and its value is related to both the quality and the quantity of the matched pairs: with τ = 1 all key points are matched but the quality of the matches falls, and with τ = 0 no key points are matched, so both considerations should be weighed when choosing τ; our experiments show τ = 0.8 gives the best effect.
On the basis of the above SIFT algorithm and the search for best candidate matching key points by feature matching, the steps of the preprocessing method for multi-view images of the present invention, with reference to Fig. 4, are as follows.
First, one viewpoint image of the multi-view images captured at the same moment by the different cameras of a multi-view camera system is taken as the target image and another viewpoint image as the source image. The set of key points extracted from the target image is denoted {^(T)P} and the set extracted from the source image {^(S)P}; the plane coordinates of the target image are defined as the x'y' coordinate system and those of the source image as the xy coordinate system.
Extremum detection is performed on the target and source images in scale space by SIFT, and the SIFT feature vector of each key point in the target key-point set {^(T)P} = {^(T)P(1), ^(T)P(2), …, ^(T)P(M)} and the source key-point set {^(S)P} = {^(S)P(1), ^(S)P(2), …, ^(S)P(N)} is extracted, comprising the position of the key point, the scale at which it lies and its direction parameters, where M and N are the numbers of key points of the target and source images respectively.
From the extracted SIFT feature vectors, feature matching is carried out between the target key-point set {^(T)P} and the source key-point set {^(S)P}, and the set of all matched key-point pairs is expressed as (^(T)P(x1', y1'), ^(S)P(x1, y1)).
The target and source image data are converted from the RGB colour space to the CIELAB colour space; of the three CIELAB components, the 1st, the luminance component, is denoted L, the 2nd, the first colour component, is denoted a, and the 3rd, the second colour component, is denoted b. The CIELAB colour space is obtained on the basis of the CIEXYZ colour space, so conversion from RGB to CIELAB first converts from RGB to CIEXYZ and then from CIEXYZ to CIELAB. Taking standard daylight D65 as the light source, the conversion from RGB to CIEXYZ is expressed as

[X; Y; Z] = [0.412453 0.357580 0.180423; 0.212671 0.715160 0.072169; 0.019334 0.119193 0.950227] [R; G; B],

where [R, G, B] lies in [0, 1]. The conversion from CIEXYZ to CIELAB is expressed as L = 116·f(Y/Yn) − 16, a = 500·[f(X/Xn) − f(Y/Yn)], b = 200·[f(Y/Yn) − f(Z/Zn)], where f(t) = t^(1/3) if t > δ³ and f(t) = t/(3δ²) + 16/116 otherwise, with δ = 6/29, and Xn, Yn, Zn are the tristimulus values, Xn = 95.047, Yn = 100.000, Zn = 108.883. The dynamic range of L is [0, 100], and the dynamic ranges of a and b are [−120, 120].
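The RGB → CIEXYZ → CIELAB chain can be sketched as follows. One assumption of this sketch: the matrix maps [0,1] RGB to XYZ on a 0–1 scale, so a factor of 100 is applied to put XYZ on the same 0–100 scale as the tristimulus values Xn, Yn, Zn:

```python
import numpy as np

M_RGB2XYZ = np.array([[0.412453, 0.357580, 0.180423],
                      [0.212671, 0.715160, 0.072169],
                      [0.019334, 0.119193, 0.950227]])
WHITE = np.array([95.047, 100.000, 108.883])  # D65 tristimulus Xn, Yn, Zn
DELTA = 6 / 29

def f(t):
    """Piecewise cube-root used by the CIEXYZ -> CIELAB conversion."""
    return np.where(t > DELTA**3, np.cbrt(t), t / (3 * DELTA**2) + 16 / 116)

def rgb_to_lab(rgb):
    """RGB in [0, 1] -> CIEXYZ (scaled to 0-100, an assumption) -> CIELAB."""
    xyz = 100.0 * (M_RGB2XYZ @ np.asarray(rgb, float))
    fx, fy, fz = f(xyz / WHITE)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])
```

As a sanity check, RGB white maps to L close to 100 with a and b near zero, and black maps to (0, 0, 0).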
The multiplicative error a_i and additive error e_i are defined to describe the colour difference between images: the multiplicative error is mainly caused by the spectral characteristics of the vision system, and the additive error by drift of the colour values. Over the set Ω of matched key-point pairs (^(T)P(x1', y1'), ^(S)P(x1, y1)) of the target and source images, the sum of absolute differences of each of the L, a and b components is minimised,

(a_i, e_i) = argmin_{a_i, e_i} Σ_{(x1, y1), (x1', y1') ∈ Ω} abs( ^(T)I_i(x1', y1') − (a_i · ^(S)I_i(x1, y1) + e_i) ),

to obtain the multiplicative error a_i and additive error e_i of each of the L, a and b components, where Ω is the set of matched key-point pairs of the target and source images, ^(S)I_i(x1, y1) is the colour value of the i-th component of the source image, ^(T)I_i(x1', y1') is the colour value of the i-th component of the target image, and i = 1, 2, 3.
Having obtained the multiplicative error a_i and the additive error e_i of each of the L, a and b components, a color correction operation is applied to the L, a and b components of every pixel of the source image: ^(C)I_i(x1, y1) = a_i·^(S)I_i(x1, y1) + e_i, where ^(C)I_i(x1, y1) is the color value of the i-th component of the corrected image, ^(S)I_i(x1, y1) is the color value of the i-th component of the source image, and i = 1, 2, 3. The corrected image is then transformed from the CIELAB color space back to the RGB color space, first from CIELAB to CIEXYZ and then from CIEXYZ to RGB. The CIELAB-to-CIEXYZ transformation is: first define fy = (L + 16)/116, fx = fy + a/500 and fz = fy - b/200; if fy > δ, Y = Yn·fy³, otherwise Y = (fy - 16/116)·3δ²·Yn; if fx > δ, X = Xn·fx³, otherwise X = (fx - 16/116)·3δ²·Xn; if fz > δ, Z = Zn·fz³, otherwise Z = (fz - 16/116)·3δ²·Zn, where δ = 6/29 and Xn, Yn, Zn are the tristimulus values Xn = 95.047, Yn = 100.000, Zn = 108.883. The CIEXYZ-to-RGB transformation is

    [R]   [ 3.240479  -1.537150  -0.498535] [X]
    [G] = [-0.969256   1.875992   0.041556] [Y]
    [B]   [ 0.055648  -0.204043   1.057311] [Z].
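The correction operation and the inverse CIELAB-to-RGB conversion can be sketched in the same style (illustrative names; the XYZ values are again scaled by 100 relative to the [0, 1] RGB range):

```python
import numpy as np

# CIEXYZ -> RGB matrix and D65 reference white from the patent
M_XYZ2RGB = np.array([[ 3.240479, -1.537150, -0.498535],
                      [-0.969256,  1.875992,  0.041556],
                      [ 0.055648, -0.204043,  1.057311]])
Xn, Yn, Zn = 95.047, 100.000, 108.883
DELTA = 6.0 / 29.0

def correct_component(src_comp, a_i, e_i):
    # per-channel color correction: I_c = a_i * I_s + e_i
    return a_i * np.asarray(src_comp, dtype=float) + e_i

def lab_to_rgb(L, a, b):
    """Inverse conversion CIELAB -> CIEXYZ -> RGB (RGB on the [0, 1] scale)."""
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def inv(f, wn):
        # cube above delta, linear branch below
        return np.where(f > DELTA, wn * f ** 3,
                        (f - 16.0 / 116.0) * 3 * DELTA ** 2 * wn)
    xyz = np.stack([inv(fx, Xn), inv(fy, Yn), inv(fz, Zn)], axis=-1) / 100.0
    return xyz @ M_XYZ2RGB.T
```

Feeding L = 100, a = b = 0 back through `lab_to_rgb` recovers a value close to (1, 1, 1), which is a quick sanity check on the matrix pair.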
After the corrected image has been transformed back to the RGB color space, an affine transformation from the pixels of the corrected image to the pixels of the target image is established, X′ = (x1′, y1′)^T = A·(x1, y1)^T + B = A·X + B, where A = [a11, a12; a21, a22] and B = (b1, b2)^T. The three key-point pairs with the smallest Euclidean distances are selected from the matched key-point pair set (^(T)P(x1′, y1′), ^(S)P(x1, y1)) as initial values, and the rotation matrix A and the translation vector B are computed.
Finally, with the rotation matrix A and the translation vector B, every pixel (x1, y1) of the corrected image is geometrically calibrated in the y direction via y3 = a21·x1 + a22·y1 + b2, yielding the pixel (x1, y3) of the geometrically calibrated image.
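The affine estimation from three key-point pairs and the subsequent y-direction calibration can be sketched as follows (an illustrative NumPy sketch: the three pairs give six linear equations in the six unknowns a11, a12, a21, a22, b1, b2):

```python
import numpy as np

def affine_from_three_pairs(src_pts, tgt_pts):
    """Solve x' = A x + B from three (source, target) point pairs.
    Each pair contributes two rows of the 6x6 linear system."""
    M = np.zeros((6, 6))
    rhs = np.zeros(6)
    for k, ((x, y), (xp, yp)) in enumerate(zip(src_pts, tgt_pts)):
        M[2 * k]     = [x, y, 0, 0, 1, 0]   # x' = a11*x + a12*y + b1
        M[2 * k + 1] = [0, 0, x, y, 0, 1]   # y' = a21*x + a22*y + b2
        rhs[2 * k], rhs[2 * k + 1] = xp, yp
    a11, a12, a21, a22, b1, b2 = np.linalg.solve(M, rhs)
    return np.array([[a11, a12], [a21, a22]]), np.array([b1, b2])

def calibrate_y(x1, y1, A, B):
    # geometric calibration along the y direction only: y3 = a21*x1 + a22*y1 + b2
    return x1, A[1, 0] * x1 + A[1, 1] * y1 + B[1]
```

The three selected pairs must not be collinear, otherwise the 6x6 system is singular.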
After the multi-view images have undergone the above color correction and geometric calibration preprocessing, they are encoded; this reduces the prediction error of disparity estimation and compensation and improves coding performance.
The subjective and objective performance of the multi-view image color correction and geometric calibration preprocessing of the present invention, together with the resulting coding performance, are compared below.
The multi-view image preprocessing method of the present invention is applied to the two multi-view video test sets "flamenco1" and "glof2" provided by KDDI Corporation. Fig. 5a and Fig. 5b show the target image and the source image of the "flamenco1" test set, and Fig. 5c and Fig. 5d show those of the "glof2" test set; the target and source images are 320 × 240. As can be seen from the figures, the color appearance of the target and source images is clearly inconsistent and there is a small offset in the vertical direction, so color correction and geometric calibration are quite necessary. For the target and source images of "flamenco1" and "glof2", the initial feature vectors extracted with the SIFT algorithm are shown in Fig. 6a, Fig. 6b, Fig. 6c and Fig. 6d; owing to occlusion and exposure differences between viewpoints, some feature vectors have no matching counterpart. The key points of the "flamenco1" and "glof2" target and source images extracted with the feature matching algorithm of the present invention are shown in Fig. 7a, Fig. 7b, Fig. 7c and Fig. 7d: the matching relationship between target-image and source-image key points is unambiguous and mismatched key points are essentially eliminated; the size of each block in Fig. 7a, Fig. 7b, Fig. 7c and Fig. 7d represents the scale at which the key point was detected.
The corrected images obtained by applying the color correction of the present invention to the source images of "flamenco1" and "glof2" are shown in the right panels of Fig. 8a and Fig. 8b. From the subjective quality of the images it can be seen that, compared with the target images in the left panels of Fig. 8a ("flamenco1") and Fig. 8b ("glof2"), the color appearance of the corrected images is very close to that of the target images; the effect is particularly evident on the floor of "flamenco1" and the grass of "glof2".
The color difference between the corrected source image and the target image for "flamenco1" and "glof2" is compared with that between the uncorrected source image and the target image; the comparisons are shown in Fig. 9a and Fig. 9b respectively. Color difference is measured with the CIEDE2000 metric, computed for each matched key-point pair. As Fig. 9a and Fig. 9b show, with the color correction method of the present invention the color difference of the key-point pairs drops drastically, indicating that the corrected image is much more similar to the target image.
The geometrically calibrated images obtained by applying the geometric calibration of the present invention to the corrected images of "flamenco1" and "glof2" are shown in the right panels of Fig. 10a and Fig. 10b; their epipolar lines are completely level with those of the target images shown in the left panels of Fig. 10a and Fig. 10b, indicating that the geometric calibration method of the present invention is effective.
The images after the color correction and geometric calibration preprocessing are encoded with the coding structure shown in Fig. 3, which uses only inter-view prediction with an I-P-P-P structure: the first viewpoint is coded with I frames, every other viewpoint is predicted from the previous viewpoint, and the same prediction structure is used at every time instant. The coding performance of the preprocessed source images is compared with that of the unpreprocessed source images. Figs. 11a, 11b and 11c give the rate-distortion curves of the Y, U and V components for the "flamenco1" test set, and Figs. 12a, 12b and 12c give those for the "glof2" test set; the coded data format is YUV 4:2:0. For "flamenco1", after preprocessing the Y component gains 0.3 dB at the same bit rate and the U component gains 0.3 dB, while the V component differs little, with a drop of 0.1~0.3 dB at the high-rate end. For "glof2", the Y component drops by 0~0.1 dB at the same bit rate, the U component gains 0.6~0.7 dB, and the V component gains 0.4~0.5 dB. The overall coding results show that the color correction and geometric calibration preprocessing of the present invention can greatly improve the coding performance of multi-view images, demonstrating its effectiveness.

Claims (3)

1. A preprocessing method for multi-view images, characterized in that it comprises the following steps:
(1) one viewpoint image among the multi-view images captured at the same instant by a multi-view camera system is defined as the target image, denoted T, and another viewpoint image is defined as the source image, denoted S; the set of key points extracted from the target image is denoted {^(T)P}, and the set of key points extracted from the source image is denoted {^(S)P}; the plane coordinates of the target image are defined as the x′y′ coordinate system, and the plane coordinates of the source image as the xy coordinate system;
(2) extremum detection is performed on the target image and the source image in scale space by the scale-invariant feature transform (SIFT) algorithm, and the SIFT feature vector of each key point in the target-image key-point set {^(T)P} and in the source-image key-point set {^(S)P} is extracted;
(3) according to the SIFT feature vectors, a feature matching algorithm obtains the best candidate matching key point ^(T)P(x1′, y1′) on the target image for the source-image key point ^(S)P(x1, y1), and the best candidate matching key point ^(S)P(x2, y2) on the source image for the target-image key point ^(T)P(x1′, y1′); a bidirectional check then determines whether the key-point pair (^(T)P(x1′, y1′), ^(S)P(x1, y1)) is a matched key-point pair;
(4) the target-image and source-image data are converted from the RGB color space to the CIEXYZ color space, and then from the CIEXYZ color space to the CIELAB color space; of the three CIELAB components, the first is the luminance component, denoted L, the second is the first chrominance component, denoted a, and the third is the second chrominance component, denoted b;
(5) for the set of key-point pairs (^(T)P(x1′, y1′), ^(S)P(x1, y1)) formed by all matched key points of the target image and the source image, the sum of absolute differences of each of the L, a and b components is minimized,

    (a_i, e_i) = argmin_{a_i, e_i} Σ_{(x1, y1), (x1′, y1′) ∈ Ω} | ^(T)I_i(x1′, y1′) - (a_i · ^(S)I_i(x1, y1) + e_i) |,

to obtain the multiplicative error a_i and additive error e_i of each of the L, a and b components, where Ω is the key-point pair set, ^(S)I_i(x1, y1) is the color value of the i-th component of the source image, ^(T)I_i(x1′, y1′) is the color value of the i-th component of the target image, and i = 1, 2, 3;
(6) with the multiplicative error a_i and the additive error e_i of each of the L, a and b components, color correction is applied to the L, a and b components of every pixel of the source image, ^(C)I_i(x1, y1) = a_i·^(S)I_i(x1, y1) + e_i, where ^(C)I_i(x1, y1) is the color value of the i-th component of the corrected image and i = 1, 2, 3; the corrected image is then transformed back to the RGB color space;
(7) an affine transformation from the pixels of the corrected image to the pixels of the target image is established, X′ = (x1′, y1′)^T = A·(x1, y1)^T + B, where A = [a11, a12; a21, a22] and B = (b1, b2)^T; the three key-point pairs with the smallest Euclidean distances are selected from the key-point pair set (^(T)P(x1′, y1′), ^(S)P(x1, y1)) as initial values, and the rotation matrix A and the translation vector B are computed;
(8) with the rotation matrix A and the translation vector B, every pixel (x1, y1) of the corrected image is geometrically calibrated in the y direction via y3 = a21·x1 + a22·y1 + b2, yielding the pixel (x1, y3) of the geometrically calibrated image.
2. The preprocessing method for multi-view images according to claim 1, characterized in that the extraction of the SIFT feature vectors comprises extracting the feature vectors of the target image and of the source image, the extraction of the source-image feature vectors comprising the following steps:
A. the pixels of the source image undergo the scale-invariant feature transform in scale space: the response of the difference-of-Gaussians operator D(x1, y1, σ) = (G(x1, y1, kσ) - G(x1, y1, σ)) * I(x1, y1) is computed for the pixels of the source image at the different scales of the scale space, a characteristic-scale curve is obtained from the responses, all local extrema are detected on the characteristic-scale curve, and the positions of the preliminary key points of the source image and the scales at which they lie are then determined from the local extrema, where σ is the scale-space factor, the two-dimensional Gaussian is G(x1, y1, σ) = (1/(2πσ²))·e^(-(x1² + y1²)/(2σ²)), k is the multiplicative factor, x1 = (x1, y1, σ)^T, and the extremum position is denoted x̂1;
B. the position of each final key point of the source image and the scale at which it lies are determined by fitting a three-dimensional quadratic function, and it is judged whether the principal curvature of the key point is less than the set principal-curvature threshold; if so, the key point is determined to be an unstable edge response point and is removed; otherwise, the extremum position x̂1 of the key point is substituted into the scale-space function D(x̂1) = D + (1/2)·(∂D^T/∂x1)·x̂1 to obtain the scale-space value D(x̂1), and it is judged whether D(x̂1) is less than the set scale-space threshold; if so, the key point is determined to be a low-contrast key point and is removed;
C. according to the scale space L(x1, y1, σ) = G(x1, y1, σ) * I(x1, y1) of the source-image key points, the direction parameters of each key point remaining after removal of the unstable edge response points and the low-contrast key points are determined from the gradient direction distribution of the pixels in the key point's neighborhood; the source-image direction parameters comprise the gradient magnitude m(x1, y1) and the orientation θ(x1, y1):

    m(x1, y1) = sqrt( (L(x1+1, y1) - L(x1-1, y1))² + (L(x1, y1+1) - L(x1, y1-1))² ),

    θ(x1, y1) = tan⁻¹( (L(x1, y1+1) - L(x1, y1-1)) / (L(x1+1, y1) - L(x1-1, y1)) );
the feature vectors of the target image are extracted by applying the same operations as for the source-image feature vectors in the x′y′ plane coordinate system.
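The gradient magnitude and orientation formulas of step C can be sketched as follows (illustrative Python; using `np.arctan2` in place of a plain tan⁻¹ so the quadrant is resolved is an implementation choice not spelled out in the claim):

```python
import numpy as np

def grad_params(Limg, x, y):
    """Gradient magnitude m and orientation theta of the smoothed image L
    at pixel (x, y), using the central differences of claim 2, step C."""
    dx = Limg[y, x + 1] - Limg[y, x - 1]
    dy = Limg[y + 1, x] - Limg[y - 1, x]
    m = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)   # quadrant-aware inverse tangent
    return m, theta
```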
3. The preprocessing method for multi-view images according to claim 1, characterized in that the feature matching algorithm is as follows: for a source-image key point ^(S)P(x1, y1) of the matched key-point pair set (^(T)P(x1′, y1′), ^(S)P(x1, y1)) of the target image and the source image, the most similar key point ^(T)P(x1′, y1′) and the second most similar key point ^(T)P(x2′, y2′) are first searched for on the target image within the maximum horizontal and vertical disparity search ranges; the Euclidean distances within N × N windows centered on the source-image and target-image key points are then computed, and if the Euclidean distances satisfy

    Σ_N E(^(S)P(x1, y1) - μ1, ^(T)P(x1′, y1′) - μ1′)² / Σ_N E(^(S)P(x1, y1) - μ1, ^(T)P(x2′, y2′) - μ2′)² < τ²,

where μ1 is the mean of the source-image key point ^(S)P(x1, y1) in the N × N window, μ1′ is the mean of the target-image key point ^(T)P(x1′, y1′) in the N × N window, μ2′ is the mean of the target-image key point ^(T)P(x2′, y2′) in the N × N window, N is the window size and τ is a preset threshold, then the most similar key point ^(T)P(x1′, y1′) is taken as the best candidate match on the target image of the source-image key point ^(S)P(x1, y1); for the target-image key point ^(T)P(x1′, y1′), the best candidate match ^(S)P(x2, y2) on the source image is obtained by computing Euclidean distances; the disparity from the source image to the target image is denoted d_sou→tar = (x1′ - x1, y1′ - y1), and the disparity from the target image to the source image is denoted d_tar→sou = (x2 - x1′, y2 - y1′); a bidirectional check is performed on d_sou→tar and d_tar→sou, and if |d_sou→tar + d_tar→sou| < 2, the key-point pair (^(T)P(x1′, y1′), ^(S)P(x1, y1)) is determined to be a matched key-point pair.
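The bidirectional check that closes claim 3 can be sketched as follows (illustrative Python; reading |d_sou→tar + d_tar→sou| as the Euclidean norm of the vector sum is an assumption):

```python
import numpy as np

def bidirectional_check(p_src, p_tgt_best, p_src_back, tol=2.0):
    """Cross-check of claim 3: p_src -> p_tgt_best is the source-to-target
    match, p_tgt_best -> p_src_back the target-to-source match. The pair is
    accepted when the forward and backward disparities nearly cancel."""
    x1, y1 = p_src
    x1p, y1p = p_tgt_best
    x2, y2 = p_src_back
    d_st = np.array([x1p - x1, y1p - y1])   # disparity source -> target
    d_ts = np.array([x2 - x1p, y2 - y1p])   # disparity target -> source
    return np.linalg.norm(d_st + d_ts) < tol
```

With the default `tol=2.0` this mirrors the |d_sou→tar + d_tar→sou| < 2 criterion of the claim.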
CNB2007101644985A 2007-12-05 2007-12-05 A kind of preprocess method of multi-view image Expired - Fee Related CN100556153C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007101644985A CN100556153C (en) 2007-12-05 2007-12-05 A kind of preprocess method of multi-view image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2007101644985A CN100556153C (en) 2007-12-05 2007-12-05 A kind of preprocess method of multi-view image

Publications (2)

Publication Number Publication Date
CN101179745A CN101179745A (en) 2008-05-14
CN100556153C true CN100556153C (en) 2009-10-28

Family

ID=39405799

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007101644985A Expired - Fee Related CN100556153C (en) 2007-12-05 2007-12-05 A kind of preprocess method of multi-view image

Country Status (1)

Country Link
CN (1) CN100556153C (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101763632B (en) * 2008-12-26 2012-08-08 华为技术有限公司 Method for demarcating camera and device thereof
CN101790103B (en) * 2009-01-22 2012-05-30 华为技术有限公司 Parallax calculation method and device
CN101556700B (en) * 2009-05-15 2012-02-15 宁波大学 Method for drawing virtual view image
CN102917175A (en) * 2012-09-13 2013-02-06 西北工业大学 Sheltering multi-target automatic image matting method based on camera array synthetic aperture imaging
CN104574331B (en) * 2013-10-22 2019-03-08 中兴通讯股份有限公司 A kind of data processing method, device, computer storage medium and user terminal
CN104463895B (en) * 2014-12-26 2017-10-24 青岛博恒康信息技术有限公司 A kind of earth's surface monitoring image processing method based on SAR
CN104933706B (en) * 2015-05-29 2017-12-01 西安电子科技大学 A kind of imaging system color information scaling method
CN105444888A (en) * 2015-11-16 2016-03-30 青岛市光电工程技术研究院 Chromatic aberration compensation method of hyperspectral imaging system
CN105352455B (en) * 2015-11-18 2017-09-05 宁波大学 A kind of plane inclination measuring method based on image blur
CN106441415B (en) * 2016-09-22 2019-07-09 江铃汽车股份有限公司 A kind of check method of vehicular meter parallax
TWI672677B (en) * 2017-03-31 2019-09-21 鈺立微電子股份有限公司 Depth map generation device for merging multiple depth maps
CN108152278B (en) * 2017-11-22 2020-07-24 沈阳普泽众康医药科技有限公司 Urine detection method and device
CN108305281B (en) * 2018-02-09 2020-08-11 深圳市商汤科技有限公司 Image calibration method, device, storage medium, program product and electronic equipment
CN109255760A (en) * 2018-08-13 2019-01-22 青岛海信医疗设备股份有限公司 Distorted image correction method and device
CN110189687B (en) * 2019-06-04 2020-10-09 深圳市摩西尔电子有限公司 Method and device for carrying out image transformation on LED module image
CN111027040B (en) * 2019-11-21 2022-11-08 中国农业银行股份有限公司 Password setting method, password verification method and devices corresponding to methods
CN113111880B (en) * 2021-05-12 2023-10-17 中国平安人寿保险股份有限公司 Certificate image correction method, device, electronic equipment and storage medium
CN114972125B (en) * 2022-07-29 2022-12-06 中国科学院国家天文台 True color image recovery method and device for deep space detection multispectral image

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
Experimental System of Free Viewpoint Television. Purim Na Bangchang et al. Proc. SPIE, 2003. *
Improving the Prediction Efficiency for Multi-View Video Coding Using Histogram Matching. U. Fecker et al. Picture Coding Symposium, 2006. *
A multi-view video correction algorithm based on region segmentation and tracking. Shao Feng et al. Acta Photonica Sinica, Vol. 36, No. 8, 2007.
An automatic color correction system for multi-view video. Shao Feng et al. Acta Optica Sinica, Vol. 27, No. 5, 2007. *
A novel multi-view image rectification algorithm. Sun Jiong et al. Computer Engineering and Applications, No. 17, 2007. *
A multi-view image rectification algorithm based on rectification-parameter adjustment. Shao Feng et al. Opto-Electronic Engineering, Vol. 34, No. 8, 2007. *

Also Published As

Publication number Publication date
CN101179745A (en) 2008-05-14

Similar Documents

Publication Publication Date Title
CN100556153C (en) A kind of preprocess method of multi-view image
WO2016086754A1 (en) Large-scale scene video image stitching method
Guttmann et al. Semi-automatic stereo extraction from video footage
CN103250184B (en) Based on the estimation of Depth of global motion
CN100542303C (en) A kind of method for correcting multi-viewpoint vedio color
WO2018119808A1 (en) Stereo video generation method based on 3d convolutional neural network
US20110080466A1 (en) Automated processing of aligned and non-aligned images for creating two-view and multi-view stereoscopic 3d images
CN103337094A (en) Method for realizing three-dimensional reconstruction of movement by using binocular camera
KR102224716B1 (en) Method and apparatus for calibrating stereo source images
CN102857739A (en) Distributed panorama monitoring system and method thereof
CN108460792B (en) Efficient focusing stereo matching method based on image segmentation
CN105654493B (en) A kind of affine constant binocular solid Matching power flow of improved optics and parallax optimization method
CN105894443A (en) Method for splicing videos in real time based on SURF (Speeded UP Robust Features) algorithm
Zilly et al. Real-time generation of multi-view video plus depth content using mixed narrow and wide baseline
CN102223545B (en) Rapid multi-view video color correction method
CN105791795A (en) Three-dimensional image processing method and device and three-dimensional video display device
CN107330856B (en) Panoramic imaging method based on projective transformation and thin plate spline
CN114187208A (en) Semi-global stereo matching method based on fusion cost and adaptive penalty term coefficient
Knorr et al. A modular scheme for artifact detection in stereoscopic omni-directional images
Li et al. A novel method for 2D-to-3D video conversion using bi-directional motion estimation
Williem et al. Depth map estimation and colorization of anaglyph images using local color prior and reverse intensity distribution
GB2585197A (en) Method and system for obtaining depth data
Shao et al. A content-adaptive multi-view video color correction algorithm
CN114608558A (en) SLAM method, system, device and storage medium based on feature matching network
CN104994365B (en) A kind of method and 2D video three-dimensional methods for obtaining non-key frame depth image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SHANGHAI SILICON INTELLECTUAL PROPERTY EXCHANGE CE

Free format text: FORMER OWNER: NINGBO UNIVERSITY

Effective date: 20120105

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 315211 NINGBO, ZHEJIANG PROVINCE TO: 200030 XUHUI, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20120105

Address after: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1704

Patentee after: Shanghai Silicon Intellectual Property Exchange Co.,Ltd.

Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818

Patentee before: Ningbo University

ASS Succession or assignment of patent right

Owner name: SHANGHAI SIPAI KESI TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: SHANGHAI SILICON INTELLECTUAL PROPERTY EXCHANGE CENTER CO., LTD.

Effective date: 20120217

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 200030 XUHUI, SHANGHAI TO: 201203 PUDONG NEW AREA, SHANGHAI

TR01 Transfer of patent right

Effective date of registration: 20120217

Address after: 201203 Shanghai Chunxiao Road No. 350 South Building Room 207

Patentee after: Shanghai spparks Technology Co.,Ltd.

Address before: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1704

Patentee before: Shanghai Silicon Intellectual Property Exchange Co.,Ltd.

ASS Succession or assignment of patent right

Owner name: SHANGHAI GUIZHI INTELLECTUAL PROPERTY SERVICE CO.,

Free format text: FORMER OWNER: SHANGHAI SIPAI KESI TECHNOLOGY CO., LTD.

Effective date: 20120606

C41 Transfer of patent application or patent right or utility model
C56 Change in the name or address of the patentee
CP02 Change in the address of a patent holder

Address after: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1706

Patentee after: Shanghai spparks Technology Co.,Ltd.

Address before: 201203 Shanghai Chunxiao Road No. 350 South Building Room 207

Patentee before: Shanghai spparks Technology Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20120606

Address after: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1704

Patentee after: Shanghai Guizhi Intellectual Property Service Co.,Ltd.

Address before: 200030 Shanghai City No. 333 Yishan Road Huixin International Building 1 building 1706

Patentee before: Shanghai spparks Technology Co.,Ltd.

DD01 Delivery of document by public notice

Addressee: Shi Lingling

Document name: Notification of Passing Examination on Formalities

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20091028

Termination date: 20161205

CF01 Termination of patent right due to non-payment of annual fee