CN101820550B - Multi-viewpoint video image correction method, device and system - Google Patents

Multi-viewpoint video image correction method, device and system

Info

Publication number
CN101820550B
CN101820550B (application CN2009101186295A / CN200910118629A)
Authority
CN
China
Prior art keywords
video image
matrix
color
color space
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2009101186295A
Other languages
Chinese (zh)
Other versions
CN101820550A (en)
Inventor
李凯
刘源
王静
苏红宏
赵嵩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Global Innovation Polymerization LLC
Tanous Co
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd filed Critical Huawei Device Co Ltd
Priority to CN2009101186295A priority Critical patent/CN101820550B/en
Priority to EP09836013A priority patent/EP2385705A4/en
Priority to PCT/CN2009/075383 priority patent/WO2010075726A1/en
Publication of CN101820550A publication Critical patent/CN101820550A/en
Priority to US13/172,193 priority patent/US8717405B2/en
Application granted granted Critical
Publication of CN101820550B publication Critical patent/CN101820550B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

Embodiments of the invention disclose a multi-viewpoint video image correction method, apparatus and system. The method comprises the following steps: acquiring at least two video images, wherein every two adjacent video images among the at least two video images have an overlapping region; selecting paired feature points of every two adjacent video images from the overlapping region; generating a color correction matrix of every two adjacent video images according to the paired feature points; and correcting received video images to be corrected using the color correction matrix. When the method is applied to color-correct video images, an overlapping region between image pairs is needed only while the color correction matrix is computed; during the subsequent correction process, color correction can be performed with the matrix regardless of whether an overlapping region exists, and the matrix need be generated only once, which saves the time spent on color-correcting the video images.

Description

Multi-viewpoint video image correction method, apparatus and system
Technical field
The present invention relates to the field of communication technology, and in particular to a multi-viewpoint video image correction method, apparatus and system.
Background technology
A multi-viewpoint video image usually refers to video images shot synchronously by a group of cameras at different positions. Owing to differences in camera performance parameters, scene lighting and so on, deviations in luminance and chrominance (luma-chroma for short) exist between the video images, and these deviations affect subsequent image matching, image compression and image synthesis.
The prior art uses histograms to correct the luma-chroma between video images; a histogram is a curve representing the luma-chroma of an image. However, this correction mode requires a large similarity between the video images. Taking the correction of two video images as an example, suppose the luma-chroma of a target image is to be corrected according to the luma-chroma of a source image; the source image and the target image shot by the two cameras must first share an overlapping region of more than 80 percent, so that the histogram generated from the target image can match the histogram generated from the source image reasonably well.
In the course of researching the prior art, the inventors found that, because histogram-based luma-chroma correction requires a large similarity between video images, in multi-viewpoint shooting scenarios where the overlapping region between video images is small or absent, the large differences between the images easily lead to poor correction results or correction failure. Moreover, because histogram-based correction must compute real-time statistics for every image, correction takes a long time.
Summary of the invention
The purpose of the embodiments of the invention is to provide a multi-viewpoint video image correction method, apparatus and system, so as to solve the color correction problem when no large overlapping region exists between multi-viewpoint video images.
To realize the purpose of the embodiments of the invention, the following technical solutions are provided:
A multi-viewpoint video image correction method comprises:
acquiring at least two video images, wherein every two adjacent video images among the at least two video images have an overlapping region;
selecting paired feature points of the two adjacent video images from the overlapping region;
generating a color correction matrix of the two adjacent video images according to the paired feature points;
correcting a received video image to be corrected using the color correction matrix.
A multi-viewpoint video image correction apparatus comprises:
an acquiring unit, configured to acquire at least two video images, wherein every two adjacent video images among the at least two video images have an overlapping region;
a selecting unit, configured to select paired feature points of the two adjacent video images from the overlapping region;
a generating unit, configured to generate a color correction matrix of the two adjacent video images according to the paired feature points;
a correcting unit, configured to correct a received video image to be corrected using the color correction matrix.
A multi-viewpoint video image correction system comprises an image correction apparatus and at least two image input devices, wherein:
the at least two image input devices are configured to input at least two video images to the image correction apparatus, wherein every two adjacent video images among the at least two video images have an overlapping region;
the image correction apparatus is configured to select paired feature points of the two adjacent video images from the overlapping region, generate a color correction matrix of the two adjacent video images according to the paired feature points, and correct, using the color correction matrix, a video image to be corrected that is transmitted by the image input devices.
As can be seen from the technical solutions provided by the above embodiments, the embodiments of the invention select paired feature points of every two adjacent video images from their overlapping region and compute a color correction matrix from those paired feature points; no matter whether a subsequent video image to be corrected overlaps the target video image, it can be color-corrected with this matrix. When the embodiments are applied to color-correct video images, an overlapping region between image pairs is needed only while the color correction matrix is computed; during the correction process, correction can be performed regardless of whether an overlapping region exists between the video images, and the matrix need be generated only once, saving the time of color-correcting the video images.
Description of drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the accompanying drawings needed in the description of the embodiments are introduced briefly below. Apparently, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a first embodiment of the multi-viewpoint video image correction method of the present invention;
Fig. 2 is a flow chart of a second embodiment of the multi-viewpoint video image correction method of the present invention;
Fig. 3 is a flow chart of a third embodiment of the multi-viewpoint video image correction method of the present invention;
Fig. 4A is one application architecture of an embodiment of the video image correction method of the present invention;
Fig. 4B is another application architecture of an embodiment of the video image correction method of the present invention;
Fig. 5 is a block diagram of a first embodiment of the multi-viewpoint video image correction apparatus of the present invention;
Fig. 6A is a block diagram of a second embodiment of the multi-viewpoint video image correction apparatus of the present invention;
Fig. 6B is a block diagram of an embodiment of the selecting unit in the multi-viewpoint video image correction apparatus of the present invention;
Fig. 6C is a block diagram of an embodiment of the generating unit in the multi-viewpoint video image correction apparatus of the present invention;
Fig. 6D is a block diagram of an embodiment of the correcting unit in the multi-viewpoint video image correction apparatus of the present invention;
Fig. 6E is a block diagram of another embodiment of the generating unit in the multi-viewpoint video image correction apparatus of the present invention;
Fig. 7 is a block diagram of an embodiment of the multi-viewpoint video image correction system of the present invention.
Embodiment
Embodiments of the invention provide a multi-viewpoint video image correction method, apparatus and system. To enable those skilled in the art to better understand the solutions of the invention, and to make the above purposes, features and advantages more apparent, the invention is further detailed below with reference to the drawings and specific embodiments.
The flow of the first embodiment of the multi-viewpoint video image correction method of the present invention is shown in Fig. 1:
Step 101: acquire at least two video images, wherein every two adjacent video images among the at least two video images have an overlapping region.
Step 102: select paired feature points of the two adjacent video images from the overlapping region.
In performing step 102, various methods can be used to obtain the paired feature points of the two adjacent video images, such as the Harris feature point detection method, the SUSAN (Smallest Univalue Segment Assimilating Nucleus) feature point detection method, wavelet-based feature point detection methods, the SIFT (Scale-Invariant Feature Transform) feature point detection method, and so on; the embodiments of the invention impose no restriction on this.
Step 103: generate a color correction matrix of the two adjacent video images according to the paired feature points.
Step 104: correct a received video image to be corrected using the color correction matrix.
In this embodiment of the invention, because an overlapping region between image pairs is needed only while the color correction matrix is computed, color correction can be performed with the matrix during the correction process regardless of whether an overlapping region exists between the video images, and the matrix need be generated only once, which saves the time of color-correcting the video images.
The flow of the second embodiment of the multi-viewpoint video image correction method of the present invention is shown in Fig. 2. This embodiment shows the process of correcting the video images shot by two cameras. The two video images shot by the two cameras need an overlapping region when the color correction matrix is initially computed; when two non-adjacent video images are later corrected with the color correction matrix, the images shot by the two cameras need not overlap, but the two adjacent video images originally shot must have an overlapping region:
Step 201: receive video images with an overlapping region shot by two cameras.
Suppose the two cameras are a source camera and a target camera respectively, where the source camera shoots the source video image and the target camera shoots the target video image; the color of the source video image needs to be corrected to be consistent with that of the target image.
Initially, the positions of the two cameras can be adjusted so that the shot source video image and target video image have an overlapping region. Compared with the large overlapping region required by existing histogram-based correction, the size of this overlapping region is not limited, as long as an overlapping region exists.
Step 202: pre-process the two shot video images.
Pre-processing of the video images includes the commonly adopted smoothing/denoising and distortion correction; this step is optional.
Step 203: perform color space conversion on the two pre-processed video images.
Color space conversion can be performed on the video images shot by the cameras; the formats before and after conversion can be any one of RGB (Red Green Blue), HSV, YUV, HSL, CIE-Lab, CIE-Luv, CMY, CMYK or XYZ.
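To illustrate the per-pixel nature of the conversion in step 203, the following is a minimal sketch using Python's standard `colorsys` module (an illustrative choice; the patent does not specify an implementation, and `rgb_to_hls` returns the HSL-family channels in hue/lightness/saturation order):

```python
import colorsys

def rgb_image_to_hls(pixels):
    """Convert a list of (r, g, b) pixels in [0, 1] to (h, l, s) triples.

    A per-pixel sketch of the color space conversion of step 203; the
    patent allows any target space (HSV, YUV, HSL, CIE-Lab, ...).
    """
    return [colorsys.rgb_to_hls(r, g, b) for (r, g, b) in pixels]

# Pure red maps to hue 0, lightness 0.5, full saturation; gray has zero saturation.
converted = rgb_image_to_hls([(1.0, 0.0, 0.0), (0.5, 0.5, 0.5)])
```

In practice the same conversion would be applied to every pixel of both video images before feature points are paired.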
Step 204: select paired feature points from the overlapping region of the two video images.
As those skilled in the art will appreciate, there are many ways to obtain feature points from the two video images, for example the Harris feature point detection method, the SUSAN (Smallest Univalue Segment Assimilating Nucleus) feature point detection method, wavelet-based feature point detection methods, the SIFT (Scale-Invariant Feature Transform) feature point detection method, and so on. To achieve a better effect, the embodiments of the invention adopt the SIFT algorithm, which is invariant to rotation, scale zooming and brightness changes, and maintains a certain degree of stability under viewpoint changes, affine transformation and noise. As those skilled in the art will understand, the above and other feature extraction algorithms can also be used to obtain the paired feature points of the overlapping region of the two video images; the embodiments of the invention impose no restriction on this.
When selecting the paired feature points of the overlapping region, the following four modes can be adopted:
Mode one: perform SIFT feature point detection on the overlapping region and match the detected feature points, obtaining multiple groups of paired feature points of the two adjacent video images.
SIFT feature point detection is one of the most common techniques in conventional image processing; it is invariant to affine transformation and to luma-chroma changes. It should be pointed out that, besides SIFT, the prior art includes various other feature point detection modes, for example the Harris detection mode, the SUSAN corner detection mode and their improved algorithms, as long as feature points can be detected from the overlapping region.
For the detected feature points, the RANSAC (RANdom SAmple Consensus) method can be used to remove mismatched feature points, thereby obtaining reliable and stable paired feature points. When removing mismatched feature points, prior-art techniques (for example, probability-statistics-based methods) can also be adopted; the embodiment of the invention is not limited in this regard.
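The patent names RANSAC without fixing a model; the following toy sketch fits a pure 2-D translation between the two overlapping views (an assumption chosen for illustration) and keeps only the matched pairs consistent with it:

```python
import random

def ransac_translation(pairs, threshold=2.0, iterations=200, seed=0):
    """Keep matched point pairs consistent with a single 2-D translation.

    A simplified RANSAC sketch: repeatedly hypothesize a translation from
    one randomly chosen pair, count how many pairs agree with it within
    `threshold` pixels, and return the largest consensus set.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iterations):
        (sx, sy), (tx, ty) = rng.choice(pairs)      # one pair fixes a translation
        dx, dy = tx - sx, ty - sy
        inliers = [((ax, ay), (bx, by)) for (ax, ay), (bx, by) in pairs
                   if abs((bx - ax) - dx) <= threshold
                   and abs((by - ay) - dy) <= threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Four consistent pairs shifted by roughly (10, 5), plus one gross mismatch.
pairs = [((0, 0), (10, 5)), ((1, 2), (11, 7)), ((3, 1), (13, 6)),
         ((4, 4), (14, 9)), ((2, 2), (50, 50))]
kept = ransac_translation(pairs)
```

A production system would instead fit a homography or fundamental matrix over SIFT descriptors, but the consensus-counting structure is the same.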
Mode two: perform SIFT feature point detection on the overlapping region, match the detected feature points to obtain multiple groups of paired feature points of the two adjacent video images, divide regions of identical extent centered on the paired feature points, and assign the mean value of the color features within each divided region to its paired feature point.
The feature point detection and removal of mismatched feature points here are consistent with the description in mode one and are not repeated.
For each group of paired feature points, regions of identical area centered on the feature points can be divided as the paired regions of the two video images, and the mean value of each color channel over the region is taken as the value of the feature point. For example, for a video image in HSL format (H for hue, S for saturation, L for lightness), each matched point has corresponding H, S and L values, and its corresponding paired region consists of a number of points; the H, S and L values of all points in this region are averaged respectively, and supposing the means are H', S' and L', then H', S' and L' are assigned to this feature point.
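The region-mean assignment of mode two can be sketched as follows (the square window and its radius are illustrative assumptions; the patent fixes only that the paired regions have identical extent):

```python
import numpy as np

def region_mean_feature(image, point, radius=1):
    """Assign a feature point the per-channel mean of a square region around it.

    `image` is an (H, W, 3) array in some color space (e.g. HSL) and
    `point` is (row, col); the window is clipped at the image border.
    """
    r, c = point
    h, w, _ = image.shape
    patch = image[max(r - radius, 0):min(r + radius + 1, h),
                  max(c - radius, 0):min(c + radius + 1, w)]
    return patch.reshape(-1, 3).mean(axis=0)    # mean H', S', L' over the region

img = np.zeros((5, 5, 3))
img[1:4, 1:4] = [0.3, 0.6, 0.9]                 # uniform 3x3 region around (2, 2)
feat = region_mean_feature(img, (2, 2), radius=1)
```

Averaging over a neighborhood makes the paired values less sensitive to per-pixel noise than sampling the feature point alone.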
Mode three: segment the overlapping region, use the corresponding segmented regions of the two video images as paired feature points, and assign the mean value of the color features within each corresponding region to the paired feature point.
When the overlapping region is segmented, the two video images can be segmented into several groups of paired regions; the extents of the regions in different groups may differ, and each region contains a certain number of feature points. Each region's color channels are averaged as the value of the feature point; the averaging process is similar to the example in mode two and is not repeated here. After the means of the color channels are computed for the paired regions respectively, several paired feature points are obtained.
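A minimal sketch of mode three with a fixed uniform grid (the patent allows regions of differing extents; the equal-sized grid here is an assumption made for simplicity):

```python
import numpy as np

def grid_paired_features(img_a, img_b, rows=2, cols=2):
    """Split two aligned overlapping regions into a grid and average each cell.

    Returns two (rows*cols, 3) matrices of per-cell channel means, one row
    per paired region, ready to be stacked into the color space matrices.
    """
    h, w, _ = img_a.shape
    feats_a, feats_b = [], []
    for i in range(rows):
        for j in range(cols):
            rs, re = i * h // rows, (i + 1) * h // rows
            cs, ce = j * w // cols, (j + 1) * w // cols
            feats_a.append(img_a[rs:re, cs:ce].reshape(-1, 3).mean(axis=0))
            feats_b.append(img_b[rs:re, cs:ce].reshape(-1, 3).mean(axis=0))
    return np.array(feats_a), np.array(feats_b)

a = np.full((4, 4, 3), 0.25)    # overlapping region from the source image
b = np.full((4, 4, 3), 0.75)    # same region as seen by the target camera
fa, fb = grid_paired_features(a, b)
```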
Mode four: receive region blocks manually chosen from the overlapping region, use the chosen corresponding region blocks of the two video images as paired feature points, and assign the mean value of the color features of each corresponding region block to the paired feature point.
Mode four differs from mode three in that, in mode three, the segmentation of the overlapping region can be completed automatically by the image correction apparatus according to a preset, whereas in mode four several paired regions are chosen manually from the overlapping region, and the chosen result is then input into the image correction apparatus for subsequent processing.
Step 205: establish the color space matrices of the two video images according to the paired feature points.
Suppose the two video images are in HSL format after color space conversion, and m matched points have been selected (m is a natural number greater than 1); then the color space matrices of the source video image and the target video image corresponding to the m matched points are respectively as follows:
Mat_dst = [h11 s12 l13; ...; hm1 sm2 lm3]_dst,  Mat_src = [h11 s12 l13; ...; hm1 sm2 lm3]_src
Wherein, Mat_dst is the color space matrix of target video image, and Mat_src is the color space matrix of source video image, with the implication of the example explanation of first behavior among Mat_dst matrix, this first line display m first point in putting, wherein h 11Be the intensity value of this first point, s 12Be the chromatic value of this first point, l 13Be the brightness of this first point, so Mat_dst is H, the S of m target pixel points of target video image in m the match point, the matrix of L value.
Step 206: establish the transformation relation between the two color space matrices, and obtain the color correction matrix according to the transformation relation.
Suppose the color correction matrix to be solved is Mat_ColorRectify; the transformation relation of color correction is established as follows:
Mat_dst = Mat_ColorRectify * Mat_src + error
Here, error represents the error between the color space matrices. The error is solved based on the above transformation relation; the formula for solving the error is as follows:
error = Σ_{i=1}^{m} (Mat_dst_i − Mat_ColorRectify * Mat_src_i)²
The Mat_ColorRectify that minimizes the error value in the above formula is the solved color correction matrix.
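The minimization above is an ordinary least-squares problem. The sketch below solves it with NumPy under the assumption that the 3×3 matrix acts row-wise on the (m, 3) matrices (dst_row ≈ M · src_row, one common convention; the patent states only Mat_dst = Mat_ColorRectify * Mat_src + error); a known matrix is used to fabricate the data so recovery can be checked:

```python
import numpy as np

# Ground-truth 3x3 correction matrix, used here only to fabricate test data.
M_true = np.array([[1.05, 0.02, 0.00],
                   [0.01, 0.98, 0.03],
                   [0.00, 0.04, 1.10]])

rng = np.random.default_rng(0)
Mat_src = rng.uniform(0.0, 1.0, size=(20, 3))   # m = 20 matched points, one HSL row each
Mat_dst = Mat_src @ M_true.T                     # row-wise convention (assumed)

# Least squares: find M minimizing sum_i ||Mat_dst_i - M @ Mat_src_i||^2.
# With row-stacked data this is lstsq(Mat_src, Mat_dst), then a transpose.
M_est = np.linalg.lstsq(Mat_src, Mat_dst, rcond=None)[0].T
```

With noise-free data the recovery is exact; with real matched points the residual corresponds to the error term of the transformation relation.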
Step 207: save the color correction matrix.
Step 208: perform color correction on the received video image to be corrected with the color correction matrix.
No matter how the positions of the source camera and the target camera change subsequently, and whether or not the shot video images intersect, the solved color correction matrix can be used for color correction. The process is as follows:
After the source camera inputs the video image to be corrected, the color space matrix of the video image to be corrected is generated. Suppose the video image to be corrected is converted into HSL format through color space conversion and consists of Y pixels; the generated color space matrix is then a (Y × 3) matrix, each row of which represents the H, S and L values of one pixel.
The color correction matrix Mat_ColorRectify is multiplied with the color space matrix of the video image to be corrected, the multiplication result is taken as the color space matrix of the corrected video image, and the corrected video image is generated from that color space matrix.
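Applying the saved matrix to a (Y × 3) pixel matrix can be sketched as a single matrix product (again assuming the row-wise convention, with a toy diagonal matrix standing in for a solved Mat_ColorRectify):

```python
import numpy as np

def correct_image(pixels, correction):
    """Apply a 3x3 color correction matrix to a (Y, 3) pixel matrix.

    Each row of `pixels` holds one pixel's channel values (e.g. H, S, L);
    the corrected color space matrix has the same (Y, 3) shape.
    """
    return pixels @ correction.T

correction = np.array([[1.0, 0.0, 0.0],
                       [0.0, 2.0, 0.0],
                       [0.0, 0.0, 0.5]])        # toy matrix: scales channels 2 and 3
pixels = np.array([[0.1, 0.2, 0.4],
                   [0.3, 0.1, 0.8]])
corrected = correct_image(pixels, correction)
```

The corrected (Y × 3) matrix is then reshaped back into the image and, if needed, converted back to the camera's original color space.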
In the above embodiment of the invention, because an overlapping region between image pairs is needed only while computing the color correction matrix, color correction can be performed with the matrix during the correction process regardless of whether an overlapping region exists between the video images, and the matrix need be generated only once, which saves the time of color-correcting the video images.
The flow of the third embodiment of the multi-viewpoint video image correction method of the present invention is shown in Fig. 3. This embodiment shows the process of correcting the video images shot by N cameras. When the color correction matrices are initially computed, the video images shot by every two adjacent cameras among the N cameras need an overlapping region; N−1 groups of adjacent video image pairs are input in total, and N−1 corresponding color correction matrices are generated. When video images are later corrected with the N−1 color correction matrices, the images shot by the N cameras need not share a common overlapping region:
Step 301: receive N video images shot by N cameras, wherein every two adjacent video images among the N video images have an overlapping region.
Every two adjacent cameras among the N cameras form a group; for example, camera 1 and camera 2 form a group, camera 2 and camera 3 form a group, and so on, until camera N−1 and camera N. The two cameras in each group are a source camera and a target camera respectively, where the source camera shoots the source video image and the target camera shoots the target video image.
Initially, the positions of the N cameras can be adjusted so that the source video image and the target video image shot by the two cameras of each group have an overlapping region. Compared with existing histogram-based correction, the size of this overlapping region is not limited.
Step 302: pre-process the N video images.
Pre-processing of the video images includes the commonly adopted smoothing/denoising and distortion correction; this step is optional, its processing is prior art, and it is not repeated here.
Step 303: perform color space conversion on the N pre-processed video images.
Suppose the video images shot by the cameras are in RGB format; the RGB images can be converted into any one of HSV, YUV, HSL, CIE-Lab, CIE-Luv, CMY, CMYK or XYZ.
Step 304: sequentially select the paired feature points of the N−1 groups of adjacent video images.
In performing step 304, the process of obtaining the matched feature points of the N−1 groups of adjacent video images is similar to step 204 of the previous embodiment and is not repeated here.
Step 305: establish the color space matrices of the two adjacent video images according to the paired feature points.
This is similar to step 205 of the previous embodiment and is not repeated here.
Step 306: establish the transformation relation between the two color space matrices, and obtain the color correction matrix according to the transformation relation.
Step 307: save the current color correction matrix.
The above steps 304 to 307 are the process of handling one of the N−1 groups of adjacent video images; this process is consistent with the description of steps 204 to 207 in the aforementioned second embodiment and is not repeated here.
Step 308: judge whether all N−1 groups of video images have been processed; if so, perform step 309; otherwise, return to step 304.
Step 309: receive a video image to be corrected sent by the K-th camera.
According to the preceding steps, N−1 color correction matrices have been obtained; suppose the 1st group of video images corresponds to the 1st color correction matrix Mat_1, the 2nd group of video images corresponds to the 2nd color correction matrix Mat_2, and so on, until the (N−1)-th group of video images corresponds to the (N−1)-th color correction matrix Mat_(N−1).
Step 310: perform a matrix transformation on the saved first K−1 color correction matrices to obtain the color correction matrix of the K-th video image.
Multiply the first through (K−1)-th color correction matrices in order to obtain the color correction matrix Mat_(k) of the video image input by the K-th camera; the transformation formula is: Mat_(k) = Mat_1 × Mat_2 × ... × Mat_(k−1).
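The chained transformation of step 310 can be sketched as a left-to-right matrix product, the order the formula states (toy diagonal matrices stand in for the solved per-pair corrections):

```python
from functools import reduce
import numpy as np

def chained_correction(matrices):
    """Compose per-pair correction matrices Mat_1 ... Mat_(k-1) in order.

    Implements Mat_(k) = Mat_1 x Mat_2 x ... x Mat_(k-1) as a running
    matrix product, yielding the correction for the K-th camera.
    """
    return reduce(np.matmul, matrices)

Mat_1 = np.diag([2.0, 1.0, 1.0])    # correction between cameras 1 and 2
Mat_2 = np.diag([1.0, 3.0, 1.0])    # correction between cameras 2 and 3
Mat_3 = np.diag([1.0, 1.0, 4.0])    # correction between cameras 3 and 4
Mat_k = chained_correction([Mat_1, Mat_2, Mat_3])   # correction for camera 4
```

Because each pairwise matrix only needs adjacent cameras to overlap, the composed matrix corrects a camera that shares no overlap with the reference camera at all.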
Step 311: perform color correction on the video image shot by the K-th camera with the color correction matrix of the K-th video image.
The above color correction matrix Mat_(k) is multiplied with the color space matrix of the video to be corrected, the multiplication result is taken as the color space matrix of the corrected video image, and the corrected video image is generated from that color space matrix.
In the above embodiment of the invention, because an overlapping region between image pairs is needed only while computing the color correction matrices, color correction can be performed with the matrices during the correction process regardless of whether an overlapping region exists between the video images, and the matrices need be generated only once, which saves the time of color-correcting the video images.
In conjunction with the above method embodiments, the present invention can be applied in conference scenarios with multi-camera shooting. Fig. 4A and Fig. 4B respectively show two common conference scenarios; both Fig. 4A and Fig. 4B include a video image input device, a video image correction device, a video image receiving device, a video image decoding device and a video image output device.
The video image input device consists of N cameras and is used to capture video images. After the shot video images are input into the video image correction device, the video images can be corrected according to the methods shown in the aforementioned embodiments of the invention; the video image correction device can be a web server, an independent video image correction processor chip, or a PC containing a video image correction chip. The corrected video images are transmitted to the video image receiving device through cable, network or satellite; the video image receiving device sends the corrected video images to the video image decoding device for decoding, and the decoded video images are displayed by the video image output device, which can be a group of regular displays or a group of stereoscopic displays.
Fig. 4A differs from Fig. 4B in that Fig. 4A can be applied in a two-dimensional or three-dimensional multi-viewpoint network conference system composed of two or more cameras, while Fig. 4B can be applied in a two-dimensional or three-dimensional multi-viewpoint network conference system composed of a complex camera array arranged from various cameras.
In the above conference scenarios, because the spacing between cameras is large, the overlapping region between the images shot by the multiple cameras is very small, or even absent, which causes the traditional histogram color correction method to fail. The embodiment of the invention requires an overlap between two video images only when the color correction matrix is generated; the matrix can subsequently be applied to color-correct video images with a small overlapping region, or even none. The detailed process is as follows: first, move the cameras so that every two cameras have sufficient overlapping regions, color-correct the video images shot in pairs, and obtain the color correction matrices between every two cameras; second, reset the cameras from the previously shifted positions back to their original positions, where the overlapping region between every two cameras is absent or very small; third, use the multiple color correction matrix transformation relations thus obtained to perform color correction of the video images shot among the multiple cameras.
Corresponding to the embodiments of the multi-viewpoint video image color correction method of the present invention, the present invention also provides embodiments of a multi-viewpoint video image color correction apparatus and system.
The block diagram of the first embodiment of the multi-viewpoint video image color correction apparatus of the present invention is shown in Fig. 5. The apparatus comprises: an acquiring unit 510, a selecting unit 520, a generating unit 530, and a correcting unit 540.
The acquiring unit 510 is configured to acquire at least two video images, wherein pairwise adjacent video images among the at least two video images have an overlapping region; the acquiring unit 510 may be implemented by an imaging device such as a video camera or a webcam. The selecting unit 520 is configured to select matched feature points of the pairwise adjacent video images from the overlapping region; the selecting unit 520 may be implemented by a dedicated processor chip with integrated image feature point extraction and matching, or by a general-purpose processor chip combined with an image feature extraction and matching algorithm. The generating unit 530 is configured to generate the color correction matrix of the pairwise adjacent video images according to the matched feature points; the generating unit 530 may be implemented by a CPLD (Complex Programmable Logic Device) with integrated matrix processing, or by an FPGA (Field Programmable Gate Array). The correcting unit 540 is configured to correct a received video image to be corrected by using the color correction matrix.
The block diagram of the second embodiment of the multi-viewpoint video image color correction apparatus of the present invention is shown in Fig. 6A. The apparatus comprises: an acquiring unit 610, a pre-processing unit 620, a converting unit 630, a selecting unit 640, a generating unit 650, and a correcting unit 660.
The acquiring unit 610 is configured to acquire at least two video images, wherein pairwise adjacent video images among the at least two video images have an overlapping region; the acquiring unit 610 may be implemented by an imaging device such as a video camera or a webcam. The pre-processing unit 620 is configured to pre-process the at least two video images after the acquiring unit acquires them, the pre-processing comprising smoothing and noise-reduction processing and/or distortion correction processing; the pre-processing unit 620 is optional. The converting unit 630 is configured to perform color space conversion on the at least two video images, the format of the video images before and after the conversion comprising RGB, HSV, YUV, HSL, CIE-Lab, CIE-Luv, CMY, CMYK, or XYZ. The selecting unit 640 is configured to select matched feature points of the pairwise adjacent video images from the overlapping region. The generating unit 650 is configured to generate the color correction matrix of the pairwise adjacent video images according to the matched feature points. The correcting unit 660 is configured to correct a received video image to be corrected by using the color correction matrix.
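As an illustrative sketch (not part of the patent), the color space conversion performed by converting unit 630 amounts to a per-pixel matrix multiply. The specific BT.601 RGB-to-YUV coefficients below are an assumption for illustration: the patent only names YUV among several supported formats and does not fix any conversion matrix.

```python
import numpy as np

# BT.601 full-range RGB -> YUV matrix. These coefficients are an
# illustrative assumption; the patent does not specify them.
RGB_TO_YUV = np.array([
    [ 0.299,  0.587,  0.114],
    [-0.147, -0.289,  0.436],
    [ 0.615, -0.515, -0.100],
])

def rgb_to_yuv(image):
    """Convert an (H, W, 3) RGB image to YUV with a per-pixel matrix multiply."""
    return image @ RGB_TO_YUV.T
```

Any of the other listed formats (HSV, CIE-Lab, and so on) would slot in the same way, as a per-pixel transform applied before feature selection.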
Specifically, as shown in Fig. 6B, the selecting unit 640 comprises at least one of the following units: a first selecting unit 641, configured to perform SIFT feature point detection on the overlapping region, match the detected feature points, and obtain multiple groups of matched feature points of the two adjacent video images; a second selecting unit 642, configured to perform SIFT feature point detection on the overlapping region, match the detected feature points, obtain multiple groups of matched feature points of the two adjacent video images, divide regions of identical extent centered on the matched feature points, and assign the mean value of the color features in the divided regions to the matched feature points; a third selecting unit 643, configured to segment the overlapping region, take the corresponding regions of the two segmented video images as matched feature points, and assign the mean value of the color features in the corresponding regions to the matched feature points; and a fourth selecting unit 644, configured to receive region blocks manually chosen from the overlapping region, take the corresponding region blocks chosen from the two video images as matched feature points, and assign the mean value of the color features of the corresponding region blocks to the matched feature points. It should be noted that, for clarity of illustration, the selecting unit 640 in Fig. 6B comprises all four of the above units, but in practical applications the selecting unit 640 may comprise at least one of them as required.
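The region-mean variants (third and fourth selecting units) can be sketched in a few lines of numpy. This is an illustrative sketch, not the patented implementation; the `blocks` coordinates are a hypothetical input standing in for the segmented or manually chosen region blocks, assumed to lie inside the overlapping region of both images.

```python
import numpy as np

def region_mean_features(img_a, img_b, blocks):
    """For each pair of corresponding region blocks, assign the mean color of
    the block as the color feature of the matched point (third/fourth
    selecting-unit variant).  `blocks` is a list of (y0, y1, x0, x1) bounds
    assumed to lie inside the overlap of both (H, W, 3) images."""
    feats_a, feats_b = [], []
    for y0, y1, x0, x1 in blocks:
        feats_a.append(img_a[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0))
        feats_b.append(img_b[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0))
    # One row per matched feature point, matching the color space matrix
    # layout described for the generating unit.
    return np.array(feats_a), np.array(feats_b)
```

Averaging over a block rather than using a single pixel makes the feature values less sensitive to noise, which is presumably why the patent assigns block means to the matched points.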
Specifically, as shown in Fig. 6C, suppose one of the pairwise adjacent video images serves as a source video image and the other as a target video image. The generating unit 650 may comprise: a color matrix establishing unit 651, configured to establish the color space matrices of the source video image and the target video image respectively, wherein each row of a color space matrix represents the color space attribute of one feature point among the matched feature points; a transformation relation establishing unit 652, configured to establish the transformation relation between the color space matrix of the source video image and that of the target video image, the transformation relation being: the product of the color space matrix of the source video image and the color correction matrix, plus an error term, equals the color space matrix of the target video image; and a correction matrix solving unit 653, configured to obtain, according to the transformation relation, the color correction matrix that minimizes the error term.
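The transformation relation of unit 652 is S·M + E = T, where S and T are the source and target color space matrices (one feature point per row) and E is the error term. Solving for the M that minimizes E can be sketched as an ordinary least-squares problem; the least-squares solver is an assumption here, since the patent says only that M is obtained when the error term is smallest.

```python
import numpy as np

def solve_color_correction(src_feats, tgt_feats):
    """Find the color correction matrix M minimizing the error term E in
    S @ M + E = T, solved here in the least-squares sense (an assumed
    solver; the patent does not name one).  Each row of src_feats and
    tgt_feats is the color space attribute of one matched feature point."""
    M, *_ = np.linalg.lstsq(src_feats, tgt_feats, rcond=None)
    return M
```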
Specifically, as shown in Fig. 6D, when the acquiring unit 610 acquires two video images, the correcting unit 660 may comprise: a video image receiving unit 661, configured to receive the video image to be corrected transmitted by the input apparatus that inputs the source video image; a color matrix generating unit 662, configured to generate the color space matrix of the video image to be corrected; a color matrix converting unit 663, configured to multiply the color correction matrix by the color space matrix of the video image to be corrected and take the product as the color space matrix of the corrected video image; and a correction result generating unit 664, configured to generate the corrected video image according to the color space matrix of the corrected video image.
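The correction step itself is a matrix multiply followed by a reshape. A minimal sketch, assuming the image's color space matrix is formed with one pixel per row (consistent with the one-row-per-feature-point layout above; the right-multiplication by M is an assumption matching the S·M = T orientation):

```python
import numpy as np

def correct_image(img, M):
    """Flatten an (H, W, 3) image into a color space matrix with one pixel
    per row, multiply by the color correction matrix M, and reshape the
    product back into the corrected image."""
    h, w, c = img.shape
    return (img.reshape(-1, c) @ M).reshape(h, w, c)
```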
Specifically, as shown in Fig. 6E, when the acquiring unit 610 acquires N pairwise adjacent video images, N being a natural number greater than 2, the N video images comprise N-1 groups of pairwise adjacent video images, each group of video images corresponding to one color correction matrix. The correcting unit 660 may comprise: a video image receiving unit 661', configured to receive the video image to be corrected transmitted by an input apparatus, the video image to be corrected being the K-th video image among the N video images; a first color matrix generating unit 662', configured to generate the color space matrix of the video image to be corrected; a correction matrix generating unit 663', configured to multiply the first through (K-1)-th color correction matrices in order to obtain the color correction matrix of the video image to be corrected; a second color matrix generating unit 664', configured to multiply the color correction matrix by the color space matrix of the video image to be corrected and take the product as the color space matrix of the corrected video image; and a correction result generating unit 665, configured to generate the corrected video image according to the color space matrix of the corrected video image.
The following describes the color correction process in the embodiment of the invention by taking as an example five pairwise adjacent video images transmitted by five video cameras. Suppose the five video images are denoted F1, F2, F3, F4, and F5, and every two adjacent video images form a group, giving four groups: F1 and F2 form one group, denoted Z1; F2 and F3 form Z2; F3 and F4 form Z3; and F4 and F5 form Z4.
According to steps 204 to 207 in the second embodiment of the foregoing method, the color correction matrices of Z1, Z2, Z3, and Z4 are solved respectively and denoted Mat_1, Mat_2, Mat_3, and Mat_4.
Since the color correction matrix of the video image input by the K-th camera can be obtained by multiplying the first through (K-1)-th color correction matrices, the color correction matrix corresponding to the image captured by the second camera is Mat_2' = Mat_1; that of the third camera is Mat_3' = Mat_1 × Mat_2; that of the fourth camera is Mat_4' = Mat_1 × Mat_2 × Mat_3; and that of the fifth camera is Mat_5' = Mat_1 × Mat_2 × Mat_3 × Mat_4.
When correcting the image captured by the second camera, multiply the color correction matrix Mat_2' by the color space matrix of the captured image; when correcting the image captured by the third camera, multiply Mat_3' by the color space matrix of the captured image; and likewise multiply Mat_4' and Mat_5' by the color space matrices of the images captured by the fourth and fifth cameras, respectively.
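The five-camera chaining above can be sketched as an ordered matrix product. This is an illustrative sketch of the Mat_K' = Mat_1 × … × Mat_{K-1} rule; the diagonal toy matrices stand in for real pairwise correction matrices.

```python
import numpy as np
from functools import reduce

def chained_correction_matrix(pairwise_mats, k):
    """Correction matrix for the K-th camera: the ordered product of the
    first K-1 pairwise matrices, Mat_K' = Mat_1 @ Mat_2 @ ... @ Mat_{K-1}."""
    return reduce(np.matmul, pairwise_mats[:k - 1])

# Toy stand-ins for Mat_1..Mat_4 from the five-camera example.
mats = [np.eye(3) * s for s in (1.1, 0.9, 1.2, 0.8)]
m5 = chained_correction_matrix(mats, 5)  # Mat_1 @ Mat_2 @ Mat_3 @ Mat_4
```

Because the product is accumulated in camera order, only the N-1 pairwise matrices ever need to be computed, and each camera's chained matrix can be precomputed once and reused for every subsequent frame.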
The block diagram of an embodiment of the multi-viewpoint video image color correction system of the present invention is shown in Fig. 7. The system comprises: at least two image input apparatuses 710 and an image correction apparatus 720. For convenience of illustration, the at least two image input apparatuses are labeled 1 to N, where N is a natural number greater than 2.
The at least two image input apparatuses 710 are configured to input at least two video images to the image correction apparatus 720, wherein pairwise adjacent video images among the at least two video images have an overlapping region.
The image correction apparatus 720 is configured to select matched feature points of the pairwise adjacent video images from the overlapping region, generate the color correction matrix of the pairwise adjacent video images according to the matched feature points, and correct the video image to be corrected transmitted by the image input apparatuses 710 by using the color correction matrix.
Further, the image correction apparatus 720 may also be configured to pre-process the at least two video images, the pre-processing comprising smoothing and noise-reduction processing and/or distortion correction processing.
Further, the image correction apparatus 720 may also be configured to perform color space conversion on the at least two video images, the format of the video images before and after the conversion comprising RGB, HSV, YUV, HSL, CIE-Lab, CIE-Luv, CMY, CMYK, or XYZ.
As can be seen from the description of the embodiments of the invention, the embodiments select matched feature points of pairwise adjacent video images from the overlapping region of the adjacent video images and calculate the color correction matrix from the matched feature points; whether or not a subsequent video image to be corrected overlaps the target video image, it can be color-corrected by this color correction matrix. When the embodiments of the invention are used to color-correct video images, an overlapping region between pairwise images is needed only when calculating the color correction matrix; during the correction process, color correction can be performed by this matrix regardless of whether the video images overlap, and the color correction matrix needs to be generated only once, saving the time spent on color-correcting video images.
Those skilled in the art can clearly understand that the present invention may be implemented by software plus a necessary general hardware platform. Based on this understanding, the essence of the technical solution of the present invention, or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention or in certain parts of the embodiments.
Although the present invention has been described through embodiments, those of ordinary skill in the art will appreciate that the present invention admits many variations and modifications without departing from its spirit, and it is intended that the appended claims cover such variations and modifications.

Claims (17)

1. A multi-viewpoint video image correction method, comprising:
acquiring at least two video images, wherein pairwise adjacent video images among the at least two video images have an overlapping region;
selecting matched feature points of the pairwise adjacent video images from the overlapping region;
establishing color space matrices of the pairwise adjacent video images respectively according to the matched feature points, establishing a transformation relation between the two color space matrices of every two adjacent video images, and obtaining a color correction matrix of the pairwise adjacent video images according to the transformation relation; and
correcting a received video image to be corrected by using the color correction matrix.
2. The method according to claim 1, wherein, after acquiring the at least two video images, the method further comprises: pre-processing the at least two video images;
the pre-processing comprises: smoothing and noise-reduction processing and/or distortion correction processing.
3. The method according to claim 1 or 2, wherein, after acquiring the at least two video images, the method further comprises: performing color space conversion on the at least two video images;
the format of the video images before and after the color space conversion comprises: RGB, HSV, YUV, HSL, CIE-Lab, CIE-Luv, CMY, CMYK, or XYZ.
4. The method according to claim 1, wherein selecting the matched feature points of the pairwise adjacent video images from the overlapping region comprises:
performing SIFT feature point detection on the overlapping region, matching the detected feature points, and obtaining multiple groups of matched feature points of the two adjacent video images; or
performing SIFT feature point detection on the overlapping region, matching the detected feature points, obtaining multiple groups of matched feature points of the two adjacent video images, dividing regions of identical extent centered on the matched feature points, and assigning the mean value of the color features in the divided regions to the matched feature points; or
segmenting the overlapping region, taking the corresponding regions of the two segmented video images as matched feature points, and assigning the mean value of the color features in the corresponding regions to the matched feature points; or
receiving region blocks manually chosen from the overlapping region, taking the corresponding region blocks chosen from the two video images as matched feature points, and assigning the mean value of the color features of the corresponding region blocks to the matched feature points.
5. The method according to claim 1, wherein one of the pairwise adjacent video images serves as a source video image and the other as a target video image, and
establishing the color space matrices of the pairwise adjacent video images respectively according to the matched feature points, establishing the transformation relation between the two color space matrices of every two adjacent video images, and obtaining the color correction matrix of the pairwise adjacent video images according to the transformation relation comprises:
establishing the color space matrices of the source video image and the target video image respectively, wherein each row of a color space matrix represents the color space attribute of one feature point among the matched feature points;
establishing the transformation relation between the color space matrix of the source video image and that of the target video image, the transformation relation being: the product of the color space matrix of the source video image and the color correction matrix, plus an error term, equals the color space matrix of the target video image; and
obtaining, according to the transformation relation, the color correction matrix that minimizes the error term.
6. The method according to claim 1, wherein, when two video images are acquired,
correcting the received video image to be corrected by using the color correction matrix comprises:
receiving the video image to be corrected transmitted by the input apparatus that inputs the source video image;
generating the color space matrix of the video image to be corrected;
multiplying the color correction matrix by the color space matrix of the video image to be corrected, and taking the product as the color space matrix of the corrected video image; and
generating the corrected video image according to the color space matrix of the corrected video image.
7. The method according to claim 1, wherein, when N pairwise adjacent video images are acquired, N being a natural number greater than 2, the N video images comprise N-1 groups of pairwise adjacent video images, each group of video images corresponding to one color correction matrix, and
correcting the received video image to be corrected by using the color correction matrix comprises:
receiving the video image to be corrected transmitted by an input apparatus, the video image to be corrected being the K-th video image among the N video images;
generating the color space matrix of the video image to be corrected;
multiplying the first through (K-1)-th color correction matrices in order to obtain the color correction matrix of the video image to be corrected;
multiplying the color correction matrix by the color space matrix of the video image to be corrected, and taking the product as the color space matrix of the corrected video image; and
generating the corrected video image according to the color space matrix of the corrected video image.
8. The method according to any one of claims 1, 2, and 4 to 7, wherein the video image to be corrected is specifically: at least two video images that have no overlapping region.
9. The method according to claim 3, wherein the video image to be corrected is specifically: at least two video images that have no overlapping region.
10. A multi-viewpoint video image correction apparatus, comprising:
an acquiring unit, configured to acquire at least two video images, wherein pairwise adjacent video images among the at least two video images have an overlapping region;
a selecting unit, configured to select matched feature points of the pairwise adjacent video images from the overlapping region;
a generating unit, configured to establish color space matrices of the pairwise adjacent video images respectively according to the matched feature points, establish a transformation relation between the two color space matrices of every two adjacent video images, and obtain a color correction matrix of the pairwise adjacent video images according to the transformation relation; and
a correcting unit, configured to correct a received video image to be corrected by using the color correction matrix.
11. The apparatus according to claim 10, further comprising:
a pre-processing unit, configured to pre-process the at least two video images after the acquiring unit acquires them, the pre-processing comprising smoothing and noise-reduction processing and/or distortion correction processing.
12. The apparatus according to claim 10 or 11, further comprising:
a converting unit, configured to perform color space conversion on the at least two video images, the format of the video images before and after the conversion comprising RGB, HSV, YUV, HSL, CIE-Lab, CIE-Luv, CMY, CMYK, or XYZ.
13. The apparatus according to claim 10, wherein the selecting unit comprises at least one of the following units:
a first selecting unit, configured to perform SIFT feature point detection on the overlapping region, match the detected feature points, and obtain multiple groups of matched feature points of the two adjacent video images;
a second selecting unit, configured to perform SIFT feature point detection on the overlapping region, match the detected feature points, obtain multiple groups of matched feature points of the two adjacent video images, divide regions of identical extent centered on the matched feature points, and assign the mean value of the color features in the divided regions to the matched feature points;
a third selecting unit, configured to segment the overlapping region, take the corresponding regions of the two segmented video images as matched feature points, and assign the mean value of the color features in the corresponding regions to the matched feature points; and
a fourth selecting unit, configured to receive region blocks manually chosen from the overlapping region, take the corresponding region blocks chosen from the two video images as matched feature points, and assign the mean value of the color features of the corresponding region blocks to the matched feature points.
14. The apparatus according to claim 10, wherein one of the pairwise adjacent video images serves as a source video image and the other as a target video image, and
the generating unit comprises:
a color matrix establishing unit, configured to establish the color space matrices of the source video image and the target video image respectively, wherein each row of a color space matrix represents the color space attribute of one feature point among the matched feature points;
a transformation relation establishing unit, configured to establish the transformation relation between the color space matrix of the source video image and that of the target video image, the transformation relation being: the product of the color space matrix of the source video image and the color correction matrix, plus an error term, equals the color space matrix of the target video image; and
a correction matrix solving unit, configured to obtain, according to the transformation relation, the color correction matrix that minimizes the error term.
15. The apparatus according to claim 10, wherein, when the acquiring unit acquires two video images,
the correcting unit comprises:
a video image receiving unit, configured to receive the video image to be corrected transmitted by the input apparatus that inputs the source video image;
a color matrix generating unit, configured to generate the color space matrix of the video image to be corrected;
a color matrix converting unit, configured to multiply the color correction matrix by the color space matrix of the video image to be corrected and take the product as the color space matrix of the corrected video image; and
a correction result generating unit, configured to generate the corrected video image according to the color space matrix of the corrected video image.
16. The apparatus according to claim 10, wherein, when the acquiring unit acquires N pairwise adjacent video images, N being a natural number greater than 2, the N video images comprise N-1 groups of pairwise adjacent video images, each group of video images corresponding to one color correction matrix; and
the correcting unit comprises:
a video image receiving unit, configured to receive the video image to be corrected transmitted by an input apparatus, the video image to be corrected being the K-th video image among the N video images;
a first color matrix generating unit, configured to generate the color space matrix of the video image to be corrected;
a correction matrix generating unit, configured to multiply the first through (K-1)-th color correction matrices in order to obtain the color correction matrix of the video image to be corrected;
a second color matrix generating unit, configured to multiply the color correction matrix by the color space matrix of the video image to be corrected and take the product as the color space matrix of the corrected video image; and
a correction result generating unit, configured to generate the corrected video image according to the color space matrix of the corrected video image.
17. A multi-viewpoint video image correction system, comprising: an image correction apparatus and at least two image input apparatuses, wherein
the at least two image input apparatuses are configured to input at least two video images to the image correction apparatus, wherein pairwise adjacent video images among the at least two video images have an overlapping region; and
the image correction apparatus is configured to select matched feature points of the pairwise adjacent video images from the overlapping region, establish color space matrices of the pairwise adjacent video images respectively according to the matched feature points, establish a transformation relation between the two color space matrices of every two adjacent video images, obtain a color correction matrix of the pairwise adjacent video images according to the transformation relation, and correct the video image to be corrected transmitted by the image input apparatuses by using the color correction matrix.
CN2009101186295A 2008-12-30 2009-02-26 Multi-viewpoint video image correction method, device and system Expired - Fee Related CN101820550B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN2009101186295A CN101820550B (en) 2009-02-26 2009-02-26 Multi-viewpoint video image correction method, device and system
EP09836013A EP2385705A4 (en) 2008-12-30 2009-12-08 Method and device for generating stereoscopic panoramic video stream, and method and device of video conference
PCT/CN2009/075383 WO2010075726A1 (en) 2008-12-30 2009-12-08 Method and device for generating stereoscopic panoramic video stream, and method and device of video conference
US13/172,193 US8717405B2 (en) 2008-12-30 2011-06-29 Method and device for generating 3D panoramic video streams, and videoconference method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101186295A CN101820550B (en) 2009-02-26 2009-02-26 Multi-viewpoint video image correction method, device and system

Publications (2)

Publication Number Publication Date
CN101820550A CN101820550A (en) 2010-09-01
CN101820550B true CN101820550B (en) 2011-11-23

Family

ID=42655456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101186295A Expired - Fee Related CN101820550B (en) 2008-12-30 2009-02-26 Multi-viewpoint video image correction method, device and system

Country Status (1)

Country Link
CN (1) CN101820550B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102438153B (en) 2010-09-29 2015-11-25 华为终端有限公司 Multi-camera image correction method and equipment
CN102572450A (en) * 2012-01-10 2012-07-11 中国传媒大学 Three-dimensional video color calibration method based on scale invariant feature transform (SIFT) characteristics and generalized regression neural networks (GRNN)
CN102905147A (en) * 2012-09-03 2013-01-30 上海立体数码科技发展有限公司 Three-dimensional image correction method and apparatus
EP2741502A1 (en) 2012-12-07 2014-06-11 Thomson Licensing Method and apparatus for color transfer between images
CN104301608B (en) * 2014-09-22 2018-04-27 联想(北京)有限公司 Information processing method and electronic equipment
CN104794695B (en) * 2015-04-29 2017-11-21 北京明兰网络科技有限公司 Based on the method for handling three-dimensional house decoration material taken pictures
CN106254844B (en) * 2016-08-25 2018-05-22 成都易瞳科技有限公司 A kind of panoramic mosaic color calibration method
CN107730565B (en) * 2017-10-12 2020-10-20 浙江科技学院 OCT image-based material intrinsic spectral feature extraction method
CN108122199A (en) * 2017-12-19 2018-06-05 歌尔科技有限公司 The original image color method of adjustment and device of a kind of panorama camera
CN108600771B (en) * 2018-05-15 2019-10-25 东北农业大学 Recorded broadcast workstation system and operating method
CN109934786B (en) * 2019-03-14 2023-03-17 河北师范大学 Image color correction method and system and terminal equipment
CN112529784B (en) * 2019-09-18 2024-05-28 华为技术有限公司 Image distortion correction method and device
CN116193057B (en) * 2023-04-26 2023-07-07 广东视腾电子科技有限公司 Multi-port transmission optical fiber video extension method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101035261A (en) * 2007-04-11 2007-09-12 宁波大学 Image signal processing method of the interactive multi-view video system
CN101047867A (en) * 2007-03-20 2007-10-03 宁波大学 Method for correcting multi-viewpoint video color
CN101262606A (en) * 2008-01-16 2008-09-10 宁波大学 A processing method for multi-view point video


Also Published As

Publication number Publication date
CN101820550A (en) 2010-09-01

Similar Documents

Publication Publication Date Title
CN101820550B (en) Multi-viewpoint video image correction method, device and system
US8717462B1 (en) Camera with color correction after luminance and chrominance separation
David Stump Digital cinematography: fundamentals, tools, techniques, and workflows
US8737736B2 (en) Tone mapping of very large aerial image mosaic
CN107909553B (en) Image processing method and device
CN102778201A (en) Image processing device, image processing method, and program
JP2007183872A (en) Dynamic camera color correction device and video retrieving device using the same
US11350070B2 (en) Systems, methods and computer programs for colorimetric mapping
JP2018507620A (en) Method and apparatus for decoding color pictures
CN111312141B (en) Color gamut adjusting method and device
CN111491149B (en) Real-time image matting method, device, equipment and storage medium based on high-definition video
CN101321298B (en) Color gamut component analysis apparatus, method of analyzing color gamut component
CN104145477A (en) Method and system for color adjustment
CN101501751B (en) Display device, method for generating four or more primary color signals
US20170339316A1 (en) A method and device for estimating a color mapping between two different color-graded versions of a sequence of pictures
US20160286090A1 (en) Image processing method, image processing apparatus, and image processing program
US9665948B2 (en) Saturation compensation method
KR102285756B1 (en) Electronic system and image processing method
CN113450270A (en) Correction parameter generation method, electronic device, and storage medium
US8019153B2 (en) Wide luminance range colorimetrically accurate profile generation method
CN103489427A (en) Method and system for converting YUV into RGB and converting RGB into YUV
Lam et al. Automatic white balancing using luminance component and standard deviation of RGB components [image preprocessing]
US8390699B2 (en) Opponent color detail enhancement for saturated colors
US20030206180A1 (en) Color space rendering system and method
CN108304805A (en) A kind of big data image recognition processing system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20180212

Address after: California, USA

Patentee after: Tanous Co.

Address before: 518129 Longgang District, Guangdong, Bantian HUAWEI base B District, building 2, building No.

Patentee before: HUAWEI DEVICE Co.,Ltd.

Effective date of registration: 20180212

Address after: California, USA

Patentee after: Global Innovation Polymerization LLC

Address before: California, USA

Patentee before: Tanous Co.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111123