WO2013164043A1 - Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view


Info

Publication number
WO2013164043A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
view
colors
feature
matching
Application number
PCT/EP2012/076229
Other languages
French (fr)
Inventor
Jürgen Stauder
Hasan SHEIKH FARIDUL
Alain Tremeau
Corinne Poree
Original Assignee
Thomson Licensing
Centre National De La Recherche Scientifique
Universite Jean Monnet
Application filed by Thomson Licensing, Centre National De La Recherche Scientifique and Universite Jean Monnet
Publication of WO2013164043A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/133: Equalising the characteristics of different image components, e.g. their average brightness or colour balance
    • H04N13/15: Processing image signals for colour aspects of image signals

Definitions

  • Method N°4 refers to the color transfer method disclosed by F. Pitié, A.C. Kokaram, and R. Dahyot in "Automated colour grading using colour distribution transfer", published in 2007 in Computer Vision and Image Understanding.
  • Table 2 shows these average results, which give an overall quality comparison of the four methods.
  • As a conclusion, a color correspondence method based on sparse feature matching has been proposed.
  • The invention allows optimizing the spatial neighborhood of sparse feature matches.
  • The invention notably proposes the clustering of the neighborhood, the computing of color cluster correspondences, and the analysis of the local color statistics of those correspondences to obtain color correspondences. From our experimental results, we find that the proposed color correspondence method can handle both spatial imprecision and occlusion. Moreover, since this method captures colors from the neighborhood of the matched features, we find it sufficient to generalize the color mapping model to the rest of the colors, for which direct correspondences are not known.
  • the invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
  • the invention may be notably implemented as a combination of hardware and software.
  • the software may be implemented as an application program tangibly embodied on a program storage unit.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU"), a random access memory (“RAM”), and input/output (“I/O”) interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
  • various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention aims at the optimization of sparse color correspondences extracted from sparse features by analyzing the spatial neighborhood of the features. The invention proposes notably first to select, by clustering, a few representative colors in each spatial neighborhood extracted around feature points, then to match these colors, notably by optimizing their color statistics.

Description

Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view.
Technical Field :
The invention concerns a method and a system for determining a color mapping model able to transform colors of a first view into colors of at least one second view, and more specifically, for creating a color look-up table from geometrically corresponding features in two images.
Background Art :
In the framework of stereo and multiple view imaging, 3D video content needs to be created, processed and reproduced on a 3D-capable display screen. Processing of 3D video content allows creating or enhancing 3D information, for example by disparity estimation. Such processing also allows enhancing 2D images using 3D information, for example by interpolating views from different viewpoints. Often, 3D video content is created from at least two captured 2D video views. By relating the at least two views of the same scene in a geometrical manner, 3D information can be extracted.
Applications using multiple views of the same scene often suffer from color differences between different views. In video processing for stereo imaging, one issue is color differences between the at least two views of the same scene from different viewpoints. These color differences may result for example from physical light effects, from inconsistent color corrections in post-production, from uncalibrated cameras used to capture the different views, or from non-calibrated film scanners. It would be preferable if such color differences could be compensated.
The compensation of such color differences would be helpful for many applications. For example, when a 3D video sequence is compressed, compensation of color differences can reduce the resulting bit rate. Another example is 3D analysis for disparity estimation in 3D video sequences. When color differences are compensated, disparity estimation can be more precise. Another example is 3D asset creation for visual effects in post-production. When color differences in a multi-view video sequence are compensated, the texture extracted for 3D objects will have better color coherence.
Known methods for the compensation of color differences in input images can be divided into two groups: color mapping and color transfer. Usually, when two images are processed for such a color compensation, a goal is to determine a color transform that allows transforming the colors of the first image into the colors of the second image.
Color mapping :
Color mapping is generally composed of three steps, as shown in figure 1. Color mapping generally starts with finding the geometric relationship between the views using feature matching. From these findings of feature matching, the relationship of colors between different views is established, called color correspondences. Color correspondences specify which colors from one view correspond to which colors from another view. Then, an appropriate color mapping model is chosen depending on the knowledge of how the colors change between the views. Finally, the color mapping model is fitted to the color correspondences by an estimation procedure.
In color mapping, it is assumed that geometrical correspondences between the input images are available in order to perform the first feature matching step. Geometrical correspondences can be automatically extracted from images using known methods. For example, a well known method for the detection of so-called feature correspondences is presented in the article entitled "Distinctive image features from scale invariant keypoints", authored by D. G. Lowe et al., and published in 2004 in the "Int. Journal of Computer Vision", Vol. 60(2), pp. 91-110. This method, called "SIFT" (Scale Invariant Feature Transform), detects corresponding feature points in the different input images by using a descriptor based on Difference of Gaussians ("DoG"). From these correspondences, corresponding colors are extracted from the input images and filled into a Color LookUp Table that is able to compensate the color differences.
In the literature, both sparse and dense feature matching are explored to achieve color correspondences. For both types of feature matching, spatial precision is crucial. Sparse feature matching may handle occlusion better than dense feature matching; however, it may not sufficiently represent the scene colors. On the other hand, though dense feature matching may address this problem, the latter needs more computational effort and may introduce additional errors due to occlusion of parts of the scene.
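As an illustration of this first step, the following minimal sketch extracts SIFT feature correspondences with OpenCV; the API calls are OpenCV's, and the ratio-test threshold is an assumption, not a value taken from the patent.

```python
import cv2

def sift_correspondences(img_ref, img_test, ratio=0.75):
    """Detect SIFT keypoints in both views and keep matches passing
    Lowe's ratio test, a standard filter against bad matches."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(img_ref, None)
    kp_test, des_test = sift.detectAndCompute(img_test, None)
    # Two nearest neighbours per descriptor are needed for the ratio test.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_ref, des_test, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    # Each keypoint carries location, scale and orientation, i.e. the
    # "deformation parameters" of the feature.
    return [(kp_ref[m.queryIdx], kp_test[m.trainIdx]) for m in good]
```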
Color correspondences, the second step of color mapping, usually extracts corresponding colors by utilizing the characteristics of the matched features generated by feature matching. The computation of color correspondences has generally the following requirements:
• The spatial precision of feature matching in the different input views and thus the precision of color correspondences should be high enough. Here precision refers to the feature's deformation parameters such as feature location, feature scale and feature orientation;
• Color correspondences should be robust against occlusion that might be present within the spatial neighborhood of features;
• Outliers, i.e. badly matched features, should be as few as possible;
• Color correspondences should sufficiently represent the scene colors so that color changes can be generalized to colors for which direct color correspondences are not known.
In the last step of color mapping, the requirement for choosing an appropriate color mapping model is that it should describe the underlying physical, "true" color changes between views. Color mapping model parameter estimation needs to deal with limited precision and outliers in color correspondences.
The color mapping models that are typically used can be classified into parametric and non-parametric models. Details can be found in the article entitled "Performance evaluation of color correction approaches for automatic multi-view image and video stitching", authored by W. Xu and J. Mulligan, published in 2010 in Proc. CVPR'10, pages 263-270. Here, a parametric model means that the model can be described using a finite number of parameters. A non-parametric model means that the model structure is not specified a priori but is instead determined from the data. The term non-parametric does not mean that such models completely lack parameters, but that the number and nature of the parameters are flexible and not fixed in advance.
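To make this distinction concrete, here is a small illustration that is not taken from the patent: a parametric model is a fixed formula with a handful of parameters, whereas a non-parametric model such as a look-up table has a size that depends on the data.

```python
import numpy as np

# Parametric: a three-parameter curve in the spirit of the GOG model
# used later in this document; only (gain, gamma, offset) are stored.
def parametric_map(c, gain, gamma, offset):
    return gain * np.power(c, gamma) + offset

# Non-parametric: a look-up table built directly from observed color
# correspondences; the number of entries grows with the data.
lut = {0: 2, 17: 21, 64: 70}      # illustrative test -> reference entries
def lut_map(c):
    return lut.get(c, c)          # identity where no entry is known
```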
Color transfer :
In color transfer, geometrical correspondences are not used. There is a case where precise geometrical correspondences are not meaningful because the input images do not show the same semantic scene but are just semantically close. For example, the colors of a first mountain shown in a first input image should be transformed by color transfer into the colors of a second mountain, different from the first mountain, shown in a second input image. In another case, the two input images show the same semantic scene, but geometrical correspondences are nevertheless not available. There are several reasons for that. First, for reasons of workflow order or computational time, geometrical correspondences may not be available at the time the color transfer is processed. A second reason may be that the number of reliable geometrical correspondences is not sufficient for color transfer, for example in low-textured images.
One well known color transfer algorithm has been disclosed in the article entitled "Color Transfer between Images", authored by E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, published in 2001 in a special issue on Applied Perception of IEEE Computer Graphics and Applications, Vol. 21, No. 5, pp. 34-41. This article proposes to transfer the first and second order image signal statistics from the reference image to the corresponding target image. In order to be able to process the color channels separately, the use of an empirical decorrelated color space is described.
Usually, color transfer methods are suitable for artistic color changes, automatic color grading, or grayscale image colorization by example.
Conclusion concerning background art :
The document CN101673412 deals with color clustering and color cluster correspondences between two images. The document EP1107580 discloses a color mapping method which takes into account the spatial neighborhood of target pixels, i.e. the context within a local area around these pixels.
Known methods of color mapping using sparse features extract color correspondences from the colors of the feature points only. Therefore, the spatial precision of feature matching, and thus the precision of color correspondences, is low. A second problem in the literature is that sparse features do not sufficiently represent the scene's colors, so that color changes cannot be generalized to all colors for which direct color correspondence is not known. Finally, the third problem in the literature is the lack of quantitative results. Often, results are reported as images only and evaluation is purely subjective.
Summary of invention :
Notably but not only to address the first and second problems, the invention aims at the optimization of sparse color correspondences extracted from sparse features by analyzing the spatial neighborhood of the features. The invention proposes notably first to select, by clustering, a few representative colors in each spatial neighborhood extracted around feature points, then to match these colors by optimizing their color statistics.
As a summary, the invention proposes notably a color mapping method that utilizes the spatial neighborhood of sparse features.
As an appendix, in order to address the third problem, a color-difference-based quality evaluation framework for color mapping is proposed, which allows quantitative comparison of the color quality of the results.
Addressing notably these problems, the subject of the invention is a method for generating, in an image processor, a list of correspondences between colors of a first view of a scene and colors of at least one second view of the same scene, the method comprising the following steps (a code sketch of this sequence is given after the list) :
- identify features in all these views,
- perform feature matching between features identified in these different views, such that at least one feature identified in the first view matches with a feature identified in the at least one second view,
then, for each feature identified in the first view matching with a feature identified in the at least one second view :
- select a spatial neighborhood of this feature identified in the first view and a spatial neighborhood of the matching feature in the at least one second view,
- color cluster these selected neighborhoods in order to generate color clusters,
- for each color cluster generated in the neighborhood of said feature identified in the first view, search for a corresponding color cluster among the color clusters generated in the neighborhood of said matching feature in the at least one second view, such as to generate groups of corresponding color clusters,
- in each generated group, match colors between corresponding color clusters of this group, in order to generate a list of correspondences between colors of the first view and colors of the at least one second view.
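Read as code, the claimed sequence could look like the following skeleton; every helper name is an illustrative placeholder, sketched either earlier or in the detailed embodiment below, and none of them is an identifier defined by the patent.

```python
def build_color_correspondences(view_ref, view_test):
    """Skeleton of the claimed method: feature matching, neighborhood
    selection, color clustering, cluster correspondence and color
    matching, accumulated into one list of color correspondences."""
    correspondences = []
    for kp_ref, kp_test in sift_correspondences(view_ref, view_test):
        patch_ref = extract_patch(view_ref, kp_ref.pt)      # third step
        patch_test = extract_patch(view_test, kp_test.pt)
        labels_ref = cluster_patch(patch_ref)               # fourth step
        labels_test = cluster_patch(patch_test)
        for ref_lbl, test_lbl, common in corresponding_clusters(
                labels_ref, labels_test):                   # fifth step
            if is_good_candidate(labels_ref, labels_test,
                                 ref_lbl, test_lbl, common):
                correspondences += stable_color_pairs(
                    patch_ref, patch_test, common)          # sixth step
    return correspondences
```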
Each list of color correspondences is specific to a zone of these views comprising a given feature with its spatial neighborhood in these different views. It means that a group of colors of the reference view may have, in the test view, different corresponding colors, depending on the zone of the first view to which the colors of this group belong.
Advantageously, this list of color correspondences, which can be edited as a look-up table, can be used to determine a color mapping model for the transformation of the colors of the first view into colors of the at least one second view. As the invention allows building correspondences between colors that may depend on the positions of the pixels associated with these colors, this color mapping model is such that a color C1 of a pixel P in the first view corresponds to a color C2 of a pixel P' in a second view according to a function that depends not only on C1 but also on P : C2 = f(C1 ; P).
The groups of corresponding color clusters that are used to perform the matching of colors are preferably selected, for instance according to a criterion based on the size of an area which is common to the color clusters of a group, for instance at least equal to 50% of the size of the smaller color cluster among the color clusters of a group.
The colors that are selected for matching in corresponding color clusters are preferably selected among colors satisfying a criterion based on color cluster metrics, such as, for instance, cluster sizes or areas, cluster means and cluster variances.
A subject of the invention is also an image processor for generating a list of correspondences between colors of a first view of a scene and colors of at least one second view of the same scene, the image processor comprising :
- means for identifying features in all these views,
- feature matching means configured to perform feature matching between features identified in these different views, such that at least one feature identified in the first view matches with a feature identified in the at least one second view, and
- means configured to, for each feature identified in the first view matching with a feature identified in the at least one second view :
- select a spatial neighborhood of this feature identified in the first view and a spatial neighborhood of the matching feature in the at least one second view,
- color cluster these selected neighborhoods in order to generate color clusters,
- for each color cluster generated in the neighborhood of said feature identified in the first view, searching a corresponding color cluster among the color clusters generated in the neighborhood of said matching feature in the at least one second view, such as to generate groups of corresponding color clusters,
- in each generated group, match colors between corresponding color clusters of this group, in order to generate a list of correspondences between colors of the first view and colors of the at least one second view.
Brief description of drawings :
The invention will be more clearly understood on reading the description which follows, given by way of non-limiting example and with reference to the appended figures in which:
- Figure 1 illustrates a general scheme of a classical color mapping method according to the prior art, comprising three basic steps ;
- Figure 2 illustrates two views of the same scene, i.e. a reference view and a test view, to which the method according to the invention is applied;
- Figure 3 illustrates matched patches showing matching features according to the second step of the main embodiment of the invention;
- Figure 4 is a magnification of the neighbourhoods of corresponding features shown in figure 3;
- Figure 5 shows the result of the color clustering of the neighbourhoods of figure 4 according to the fourth step of the main embodiment of the invention;
- Figure 6 shows colors selected in a first pair of corresponding color clusters found in the result of figure 5;
- Figure 7 shows colors selected in a second pair of corresponding color clusters found in the result of figure 5;
- Figure 8 is a graphical representation of a list of color correspondences between colors of the red channel of the reference view and colors of the red channel of the test view of figure 2, obtained from the sixth step of the main embodiment of the invention ;
- Figure 9 is a graphical representation of a list of correspondences between colors of the green channel of the reference view and colors of the green channel of the test view of figure 2, obtained from the sixth step of the main embodiment of the invention ;
- Figure 10 is a graphical representation of a list of correspondences between colors of the blue channel of the reference view and colors of the blue channel of the test view of figure 2, obtained from the sixth step of the main embodiment of the invention ;
- Figure 11 is a diagram showing the different steps of the method according to the main embodiment of the invention.
Description of embodiments :
The different steps of the invention and the functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. Such hardware and software can easily be implemented by a person skilled in the art reading the present description, and notably the embodiment below. Such hardware and software notably form an image processor.
For the implementation of the non-limiting embodiment that will now be described, two views of the same scene are provided, as illustrated in figure 2: a reference view and a test view. Between the reference view and the test view, there is not only an overall color difference but also a difference of viewpoint. Reference view means that, after color mapping, we expect the colors of both views to be as close as possible to those of the reference view. On the other hand, the view whose colors will be mapped through the list of color correspondences generated according to the invention is called the test view. In figures 2 to 7, the left column relates to the reference view whereas the right column relates to the test view.
In a first step of the invention, features are identified in these views, and, in a second step, a feature matching operation is performed between the features identified in these different views, using a scale and rotation invariant feature matching algorithm called SIFT, as described in D.G. Lowe's article already mentioned above. Corresponding patches along with their deformation parameters (location, scale and orientation) can then be matched. The word "patch" refers to a spatial zone of a view comprising an identified feature and a geometrical neighborhood around this feature. A patch coming from the reference image is called a reference patch whereas a patch coming from the test image is called a test patch. An example of such feature matching after applying the deformation parameters is shown in figures 2 and 3. Note that, though the features shown in the two patches of figure 3 seem to match quite well, the match is not perfect. The SIFT matching shown in figure 3 is not precisely aligned, due to occlusion and to errors in SIFT's deformation parameter estimation. As SIFT points are not precisely aligned, matching errors may occur. As an example, in figure 3, the SIFT estimation error due to location, angle and occlusion is highlighted with ellipses. In other words, feature matching usually generates errors that extend over more than one pixel. Note that the application of deformation parameters such as scale and rotation gives both patches the same size and nearly similar content. It means that the patches are nearly registered, or nearly geometrically aligned.
In a third step of the invention, a spatial neighborhood is selected around each identified feature in the reference view and in the test view. Here, a rectangular neighborhood of 15x15 pixels has been selected around the feature location. Any other block of NxM pixels surrounding the identified feature can be selected, where N may or may not be equal to M. N and M are preferably both smaller than 100 for image formats such as HDTV, in order not to include parts of the scene having significantly different object motion or object shape. Figure 3 shows the 15x15 neighborhoods as black rectangles, which are magnified in figure 4. Here, the neighborhood around a SIFT matching feature is magnified by 2 for visualization purposes.
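A minimal sketch of this neighborhood selection, assuming the view is a NumPy image array and the feature location comes from the matching step; the border clamping is an implementation choice, not something the patent specifies.

```python
import numpy as np

def extract_patch(img, center, n=15, m=15):
    """Cut an n x m pixel neighborhood around a feature location,
    clamped to the image borders; n and m stay well below 100."""
    x, y = int(round(center[0])), int(round(center[1]))
    h, w = img.shape[:2]
    top, left = max(0, y - n // 2), max(0, x - m // 2)
    return img[top:min(h, top + n), left:min(w, left + m)]
```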
In a fourth step of the invention, color clustering of the selected spatial neighborhoods is performed in these different views in order to generate color clusters in these spatial neighborhoods. Figure 5 shows the result of this color clustering. As shown in figure 5, for the color clustering (i.e. color patch segmentation) of the neighborhoods selected in the reference and the test patches, a mean shift algorithm is used, as described in the article entitled "Mean shift: a robust approach toward feature space analysis", authored by D. Comaniciu and P. Meer, published in 2002 in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24(5), pp. 603-619. In figure 5, mean shift has split the studied neighborhood into two color clusters, for both the reference and the test patches. Note that the total number of color clusters in the reference patch and the total number of color clusters in the test patch may not be the same. Any other color clustering method can be used instead of the method described in Comaniciu's article.
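A possible sketch of this clustering step using the mean shift implementation of scikit-learn; the bandwidth heuristic and the single-cluster fallback are assumptions, as the patent does not prescribe them.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_patch(patch):
    """Mean-shift color clustering of a patch: returns one integer
    cluster label per pixel, shaped like the patch."""
    colors = patch.reshape(-1, 3).astype(float)
    bandwidth = estimate_bandwidth(colors, quantile=0.3)
    if bandwidth <= 0:                 # uniform patch: a single cluster
        return np.zeros(patch.shape[:2], dtype=int)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(colors)
    return labels.reshape(patch.shape[:2])
```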
In a fifth step of the invention, for each color cluster generated in the neighborhood of a feature identified in the reference view, a corresponding color cluster is searched among the color clusters generated in the neighborhood of the matching feature found in the test view. For searching such color correspondences between two color clusters generated in the neighborhoods of corresponding features extracted in the two views, we look for the largest areas which are geometrically common to a color cluster generated in the neighborhood of a feature identified in the reference view and to a color cluster generated in the neighborhood of a corresponding feature identified in the test view, as described below. Figures 6 and 7 show colors selected respectively in a first pair of corresponding color clusters and in a second pair of corresponding color clusters, these clusters being shown among others in figure 5.
The computing of color cluster correspondence can be for instance performed as follows.
First, the color clusters in the reference patch and the color clusters in the test patch are color labeled. Next, for each color label of a color cluster of the reference patch, we try to find a corresponding color label of a color cluster of the test patch.
In a first sub-step, all positions, i.e. all pixels, that are common to a color cluster of the reference patch and to a color cluster of the test patch are extracted. For instance, for each color cluster of the reference patch, an alpha-mask is built in which alpha is equal to 1 for the pixels belonging to this color cluster and equal to 0 outside the area of these pixels. Then, this alpha-mask is applied to the corresponding test patch - see the corresponding patches in figure 4 - by performing an "AND" operation between this alpha-mask and the test patch. The application of the alpha-mask selects an area of pixels which belong both to the color cluster of the reference patch and to the corresponding color cluster of the test patch. This area of pixels is called the "overlapping area". Then, an extraction criterion is applied to these overlapping areas. One extraction criterion that can be used is, for example, the size of the overlapping area between the color cluster from the reference patch and its counterpart from the test patch. Other extraction criteria can be used, for example based on the shape of the clusters.
In a second sub-step, we compute the most frequent color label among the color labels of the color clusters included in the extracted overlapping area, since this color label best represents that overlapping area. The color cluster associated with this most frequent color label is then considered to be, and is called, the "corresponding cluster".
Finally, the common area between the color cluster of the reference patch and its corresponding cluster in the test patch is defined to be the part of the overlapping area having this most frequent label.
The correspondence between color clusters is now obtained, giving one or more pairs of corresponding color clusters for each patch.
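Assuming equally sized, nearly registered patches as produced by the matching step, the two sub-steps can be sketched as follows; only the alpha-mask "AND" operation and the most-frequent-label rule come from the text above, the rest is glue code.

```python
import numpy as np

def corresponding_clusters(labels_ref, labels_test):
    """For each reference cluster, apply its alpha-mask to the test
    label image ("AND" operation), pick the most frequent test label
    inside the overlapping area, and keep the resulting common area."""
    pairs = []
    for ref_lbl in np.unique(labels_ref):
        mask = labels_ref == ref_lbl               # alpha-mask of the cluster
        counts = np.bincount(labels_test[mask])    # test labels in the overlap
        test_lbl = int(np.argmax(counts))          # most frequent test label
        common = mask & (labels_test == test_lbl)  # part carrying that label
        pairs.append((int(ref_lbl), test_lbl, common))
    return pairs
```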
Having searched, as described above, for correspondences between color clusters in the reference view and color clusters in the test view and having defined their common area, the sixth step of the invention is the matching of colors between corresponding color clusters.
Knowing from the previous step which color cluster (which label) of the reference patch corresponds to which cluster (which label) of the test patch, we will now discuss :
- whether the color clusters which are considered as "corresponding" from the previous step are classified as "good candidates" or as "bad candidates" for color correspondences ;
- how to make color correspondences from the "good candidates" computed from the color cluster correspondences.
As noted before, we face a couple of challenges in these tasks, notably occlusions, bad alignment between reference and test patches, and unbalanced sizes of the areas associated with corresponding clusters due to color differences.
To face these issues, the pairs of corresponding color clusters that are used to perform the matching of colors are preferably selected, for instance according to a criterion based on the size of the area which is common to the color clusters of a pair, this common area being for instance determined as described in the following. We first consider that a good candidate, as a pair of corresponding color clusters to be used for the matching of colors, would be a pair of color clusters having a common area, as computed above, which is at least equal to 50% of the size of the smaller cluster of the pair. If two corresponding clusters do not satisfy that condition, this pair is classified as a "bad candidate" in terms of color cluster correspondence and will not be used for the matching of colors. On the other hand, if a pair of corresponding color clusters satisfies this condition, this pair is classified as a "good candidate" in terms of color cluster correspondence and will actually be used for the matching of colors as described below.
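This 50% rule translates directly into code; a minimal sketch reusing the label images and the common area computed in the previous step:

```python
import numpy as np

def is_good_candidate(labels_ref, labels_test, ref_lbl, test_lbl, common):
    """A pair is a 'good candidate' when its common area covers at
    least 50% of the smaller of the two corresponding clusters."""
    smaller = min(np.sum(labels_ref == ref_lbl),
                  np.sum(labels_test == test_lbl))
    return np.sum(common) >= 0.5 * smaller
```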
Any method using color cluster metrics can be used to get this classification into "good" and "bad" candidates. Among color cluster metrics, cluster size, cluster shape, common cluster area, cluster color mean and cluster color variance can be used.
Now, we will infer the matching of colors from the pairs of corresponding color clusters which are selected as "good candidates". Assuming that, in the corresponding color clusters of a selected pair, the color distribution can be modeled by a normal distribution, then, among the colors in these corresponding color clusters and in each color channel, we select the colors satisfying a criterion based on a relationship using color cluster metrics, here the relationship using the mean μ and the standard deviation σ of the colors in a given color channel (equation 1, reproduced as an image in the original document and restated here from the description that follows) :
| color(channel) - μ(channel) | ≤ 2 σ(channel)
The color channels are generally : R (red), G (green), and B (blue).
Inside a color cluster, if the color of a pixel expressed in the different color channels - i.e. "color(channel)" - satisfies equation 1 above, we call it a "stable" color. It simply means, for all the color channels and for the color clusters of both the reference and test views, that this pixel follows the normal distribution and therefore lies within 2σ of the mean μ.
Finally, these selected "stable" colors are pushed into a list of final color correspondences, i.e. color matchings, that forms a look-up table. An example of the head of such a look-up table is given in Table 1.
Table 1
[Table 1 is reproduced as an image in the original document: head of the color correspondence look-up table.]
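A sketch of this selection of "stable" colors, assuming registered, equally sized patches; for simplicity, μ and σ are computed here over the common area of the cluster pair rather than over each full cluster as in the text.

```python
import numpy as np

def stable_color_pairs(patch_ref, patch_test, common):
    """Keep the pixels of the common area whose colors satisfy
    equation 1 (within 2 sigma of the mean) in every channel and in
    both views, and emit one (reference color, test color) pair each."""
    ref = patch_ref[common].astype(float)      # (n_pixels, 3) colors
    test = patch_test[common].astype(float)
    stable = np.ones(len(ref), dtype=bool)
    for colors in (ref, test):                 # both views, all channels
        mu, sigma = colors.mean(axis=0), colors.std(axis=0)
        stable &= np.all(np.abs(colors - mu) <= 2 * sigma, axis=1)
    return list(zip(map(tuple, ref[stable]), map(tuple, test[stable])))
```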
If the above process is performed for each pair of corresponding patches, i.e. for each feature identified in the reference view matching with a feature identified in the test view, a list of correspondences between colors of the first view and colors of the at least one second view is obtained.
Optionally, in a last step, the list of correspondences between colors of the reference view and colors of the test view is used for example as follows to determine a color mapping model for the transformation of the colors of the reference view into colors of the test view.
Note that, to address the complete process of color mapping determination, let us also mention how we plan to overcome the main limitations in determining the final color mapping model.
1. Going through the different steps above, we get the color correspondences from the color optimization of the spatial neighborhood present around a sparse feature match.
2. We choose a simple three-parameter color mapping model: see equation 2 below.
3. Preferably, we perform robust parameter estimation by a method called ROUT in order to handle outliers in the color correspondences. Such a method is described in the article entitled "Detecting outliers when fitting data with nonlinear regression - a new method based on robust nonlinear regression and the false discovery rate", authored by H. Motulsky and R. Brown, published in 2006 in BMC Bioinformatics, Vol. 7(1), p. 123.
The global color mapping model that we choose is based on a non-linear function defined by three parameters, as shown in equation 2 (reproduced as an image in the original document). Here, c_ref are the color coordinates of a color in the reference view and c_test are the color coordinates of the corresponding color in the test view. Parameter G defines the gain, parameter γ defines the gamma and parameter b defines the offset of the non-linear function. This function is usually called GOG (Gamma, Offset and Gain). Any other color mapping model can be chosen to implement the invention.
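Since equation 2 is not reproduced in this text, the sketch below assumes the usual GOG form c_ref = G · c_test^γ + b, applied per channel on colors normalized to [0, 1]; if the patent's exact formula differs, only this one line changes.

```python
import numpy as np

def gog(c_test, gamma, offset, gain):
    """Assumed GOG curve: c_ref = gain * c_test**gamma + offset,
    with one parameter triple (gamma, offset, gain) per color channel."""
    return gain * np.power(c_test, gamma) + offset
```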
In the following, we explain how to robustly estimate the parameters of this GOG color mapping model. As an example, let us consider the set of color correspondences shown in Table 1. In figures 8, 9 and 10, we show the color correspondences for each of the three color channels R, G and B respectively, for the two views of the same scene illustrated in figure 2, using the proposed color correspondence method described above. In these figures, the horizontal X axis refers to the test view whereas the vertical Y axis refers to the reference view. Using these color correspondences, the issue is to robustly estimate the model parameters γ, b, G. Here, robust means that the estimation should preferably not be influenced by the outliers.
Here we face two main challenges. Firstly, we hypothesize that the more outliers the color correspondences contain, as is usually the case, the more erroneous the least-squares estimation method becomes. Secondly, considering a general color mapping model which may account for the color differences, the estimation method should be non-linear. Therefore, to address these two main challenges, we propose to use a robust non-linear regression model based upon the method already referenced above, called "ROUT". However, other methods can be used as well.
The robust estimation method is performed in two steps inspired by this ROUT method. First, we iteratively classify corresponding colors into outliers and inliers. The model parameters are thereby computed from all color correspondences of the red channel (figure 8), the green channel (figure 9) and the blue channel (figure 10), respectively. Second, outliers, shown as dark-grey outer dots in these figures, are detected and removed.
Finally, the model parameters are estimated in a refinement step from the inliers only, shown as light-grey inner dots in these figures. For each of the color channels, we estimate the model parameters γ, b, G from the inliers using the least-squares method. In figures 8, 9 and 10, the black central line shows the estimated GOG (γ, b, G) curve. The determination of a color mapping model for the transformation of the colors of the reference view into colors of the test view is then entirely performed.
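A simplified sketch of this two-step robust estimation, run once per color channel; the true ROUT method detects outliers via the false discovery rate, whereas this stand-in uses a plain residual threshold on a robust scale estimate, an assumption kept simple on purpose.

```python
import numpy as np
from scipy.optimize import curve_fit

def robust_gog_fit(c_test, c_ref, n_iter=5, k=2.5):
    """Alternate least-squares fits of the GOG curve with removal of
    correspondences whose residual exceeds k robust standard
    deviations, then return the parameters fitted on the inliers."""
    inliers = np.ones(len(c_test), dtype=bool)
    params = (1.0, 0.0, 1.0)                        # gamma, offset, gain
    for _ in range(n_iter):
        params, _ = curve_fit(gog, c_test[inliers], c_ref[inliers], p0=params)
        residuals = c_ref - gog(c_test, *params)
        # Median absolute deviation as a robust scale estimate.
        scale = 1.4826 * np.median(np.abs(residuals - np.median(residuals)))
        inliers = np.abs(residuals) <= k * scale    # reclassify in/outliers
    return params, inliers
```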
Advantage of the invention over other methods of determination of a color mapping model :
In order to evaluate several color mapping methods, we need not only views with color differences but also their ground truth. Having this ground truth allows us to assess how a color mapping method performs. Since we have the ground truth, we can compare several color mapping methods. In this experiment, we used the Middlebury stereo dataset 2006, as disclosed by D. Scharstein and C. Pal in "Learning conditional random fields for stereo", published in 2007 in IEEE Conference on Computer Vision and Pattern Recognition, CVPR'07, pages 1-8. We assume that in this Middlebury stereo dataset there are no color differences between views having the same settings (same illumination and same exposure).
Now, to obtain views with color differences under the same illumination, we select the first view from one exposure and the second view from another exposure. For example, we used two images, both under illumination 3, but with a first view (view0) from exposure 1 (500 ms) and a second view (view6) from exposure 2 (2000 ms). Due to this choice of different exposures, we are able to create views with color differences while knowing their ground truth as well.
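Purely for illustration (the Middlebury 2006 data provides both exposures directly, so this helper is hypothetical and not part of the experiment), the effect of a longer exposure on a gamma-encoded image can be roughly simulated by scaling in assumed linear light:

```python
import numpy as np

def simulate_longer_exposure(img, ratio=2000 / 500, gamma=2.2):
    """Approximate a 2000 ms exposure from a 500 ms one: pixel values
    scale (approximately) linearly in linear light, assuming a gamma
    of 2.2 for the encoded 8-bit image. Both assumptions are ours."""
    linear = (img.astype(np.float64) / 255.0) ** gamma
    scaled = np.clip(linear * ratio, 0.0, 1.0)
    return (scaled ** (1.0 / gamma) * 255.0).astype(np.uint8)
```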
All color mapping methods are given these I_ref and I_test views as input.
After estimating the color mapping model, each color mapping method tries to correct the I_test view and produces a "corrected test" view as output, I_corrected_test.
Finally, the quality of the color mapping method is evaluated by comparing I_corrected_test with I_true, as described below. To compare the quality of different color mapping methods, an evaluation framework is needed. This evaluation framework computes the remaining color differences between the color mapped view, I_corrected_test, and the ground truth, I_true. If we assume that a color mapping method works well, these remaining color differences should be as low as possible. In other words, the fewer color differences remain, the better the color mapping method.
We would like to compare the remaining color differences as a human observer would. That is why we use the human-color-vision-based CIE 2000 color-difference formula, CIEDE2000. See M.R. Luo, G. Cui, and B. Rigg, "The development of the CIE 2000 colour-difference formula: CIEDE2000", Color Research & Application, Vol. 26(5), pp. 340-350, 2001. For each pixel of the image, we compute the remaining color difference after color mapping, DE00_after(i,j), between I_true and I_corrected_test, where (i,j) is the pixel's index. We have: DE00_after(i,j) = CIEDE2000(I_true(i,j), I_corrected_test(i,j)).
Similarly, the color differences before color mapping, DE00_before(i,j), are computed as follows: DE00_before(i,j) = CIEDE2000(I_true(i,j), I_test(i,j)).
If the color mapping method can efficiently compensate the color differences, we expect that DE00_after(i,j) << DE00_before(i,j). In this color difference computation, each pixel contributes a single color difference. An average of all these color differences results in a single metric of color difference. This single "image color difference" metric, DE00_diff, can then be computed according to the equation:
DE00_diff = (1/N) Σ_(i,j) DE00_after(i,j),
where N is the total number of pixels of the image.
Finally, we evaluate the quality of several color mapping methods by comparing the computed values of DE00_diff.
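A minimal sketch of this evaluation metric, using scikit-image's CIEDE2000 implementation (the image variable names are ours and are assumed to be already loaded):

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def image_color_difference(img_a, img_b):
    """Average per-pixel CIEDE2000 difference between two RGB images
    (floats in [0, 1]), i.e. the DE00_diff metric defined above."""
    de00 = deltaE_ciede2000(rgb2lab(img_a), rgb2lab(img_b))
    return float(de00.mean())

# Sketch of the protocol:
# de_before = image_color_difference(I_true, I_test)
# de_after  = image_color_difference(I_true, I_corrected_test)
# A good color mapping method should yield de_after << de_before.
```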
Experimental results :
We propose to compare the color mapping determination method according to the invention, labeled method N°2 below, with the following other color mapping determination methods :
- the method N°1, using only the sparse feature points without color clustering, as disclosed by Mehrdad Panahpour Tehrani, Akio Ishikawa, Shigeyuki Sakazawa, and Atsushi Koike in "Iterative colour correction of multicamera systems using corresponding feature points", Journal of Visual Communication and Image Representation, Vol. 21(5-6), pp. 377-391, 2010 (Special issue on Multi-camera Imaging, Coding and Innovative Display);
- the method N°3, using a color correspondence method based on dense feature matching, namely NRDC, as disclosed by Yoav HaCohen, Eli Shechtman, Dan B Goldman, and Dani Lischinski in "Non-rigid dense correspondence with applications for image enhancement", ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2011), 30(4):70:1-70:9, 2011;
- the method N°4, referring to the color transfer method disclosed by F. Pitie, A.C. Kokaram, and R. Dahyot in "Automated colour grading using colour distribution transfer", Computer Vision and Image Understanding, 107(1-2):123-137, 2007.
To conduct the experiment, we selected 13 stereo pairs from the Middlebury stereo dataset 2006. Each method under experiment received those 13 stereo pairs and computed a set of color mapped views separately.
All those color mapped views are compared with the ground truth to calculate the remaining color differences in deltaE00 units as described above. Note that a color difference of less than 1 deltaE00 unit is barely noticeable to the human eye.
Since it is difficult to compare these methods from one image to another, we compute the average of the average remaining color differences over all 13 stereo pairs. Table 2 shows these averaged results, which provide an overall quality comparison of the four methods.
Table 2: average remaining color differences (in deltaE00 units) obtained by the four compared color mapping methods.
We can see in table 2 that the method N°2 according to the invention clearly outperforms the other methods.

Conclusion :
A sparse-feature-matching-based color correspondence method according to the invention has been proposed. To address the problems of spatial precision of feature matching, of occlusion and of sufficient representation of scene colors, the invention allows the optimization of the neighborhood of sparse feature matchings.
The invention notably proposes the clustering of these neighborhoods, the computation of color cluster correspondences, and the analysis of the local statistics of these color cluster correspondences in color space to obtain color correspondences. From our experimental results, we find that the proposed color correspondence method according to the invention can handle both spatial imprecision and occlusion. Moreover, since this method captures colors from the neighborhood of each feature matching, we find it sufficient to generalize the color mapping model to the rest of the colors for which direct correspondences are not known.
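To make the summarized pipeline concrete, here is a minimal sketch of the claimed steps for one pair of views. It uses k-means as a stand-in for the color clustering and a nearest-centroid rule as a stand-in for the analysis of local color cluster statistics, so it illustrates the structure of the method rather than the exact implementation; all names are ours:

```python
import numpy as np
from sklearn.cluster import KMeans

def color_correspondences(view_ref, view_test, matches, radius=15, k=3):
    """Sketch of the pipeline for one pair of views. `matches` holds
    ((x_ref, y_ref), (x_test, y_test)) feature matches computed
    beforehand (e.g. with SIFT)."""
    pairs = []
    for (xr, yr), (xt, yt) in matches:
        # 1. Select a spatial neighborhood around each matched feature.
        p_ref = view_ref[max(yr - radius, 0):yr + radius,
                         max(xr - radius, 0):xr + radius].reshape(-1, 3)
        p_test = view_test[max(yt - radius, 0):yt + radius,
                           max(xt - radius, 0):xt + radius].reshape(-1, 3)
        if p_ref.shape[0] < k or p_test.shape[0] < k:
            continue  # neighborhood too small to cluster
        # 2. Color cluster each neighborhood.
        km_ref = KMeans(n_clusters=k, n_init=4).fit(p_ref)
        km_test = KMeans(n_clusters=k, n_init=4).fit(p_test)
        # 3. Pair each reference cluster with the closest test cluster.
        for c_ref in km_ref.cluster_centers_:
            d = np.linalg.norm(km_test.cluster_centers_ - c_ref, axis=1)
            # 4. One color correspondence per group of matched clusters.
            pairs.append((c_ref, km_test.cluster_centers_[np.argmin(d)]))
    return np.asarray(pairs)
```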
Based on these comparisons, we have shown that, on average, the method according to the invention clearly outperforms the other methods.
It is to be understood that the invention may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof. The invention may be notably implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPU"), a random access memory ("RAM"), and input/output ("I/O") interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
While the present invention is described with respect to particular examples and preferred embodiments, it is understood that the present invention is not limited to these examples and embodiments. The present invention as claimed therefore includes variations from the particular examples and preferred embodiments described herein, as will be apparent to one of skill in the art. While some of the specific embodiments may be described and claimed separately, it is understood that the various features of embodiments described and claimed herein may be used in combination. Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.

Claims

1. Method for generating, in an image processor, a list of correspondences between colors of a first view of a scene and colors of at least one second view of the same scene, the method comprising the following steps :
- identify features in all these views,
- perform feature matching between features identified in these different views, such that at least one feature identified in the first view matches with a feature identified in the at least one second view,
then, for each feature identified in the first view matching with a feature identified in the at least one second view :
- select a spatial neighborhood of this feature identified in the first view and a spatial neighborhood of the matching feature in the at least one second view,
- color cluster these selected neighborhoods in order to generate color clusters,
- for each color cluster generated in the neighborhood of said feature identified in the first view, search a corresponding color cluster among the color clusters generated in the neighborhood of said matching feature in the at least one second view, such as to generate groups of corresponding color clusters,
- in each generated group, match colors between corresponding color clusters of this group, in order to generate a list of correspondences between colors of the first view and colors of the at least one second view.
2. Image processor for generating a list of correspondences between colors of a first view of a scene and colors of at least one second view of the same scene, the image processor comprising :
- means for identifying features in all these views,
- feature matching means configured to perform feature matching between features identified in these different views, such that at least one feature identified in the first view matches with a feature identified in the at least one second view, and
- means configured to, for each feature identified in the first view matching with a feature identified in the at least one second view :
- select a spatial neighborhood of this feature identified in the first view and a spatial neighborhood of the matching feature in the at least one second view,
- color cluster these selected neighborhoods in order to generate color clusters,
- for each color cluster generated in the neighborhood of said feature identified in the first view, search a corresponding color cluster among the color clusters generated in the neighborhood of said matching feature in the at least one second view, such as to generate groups of corresponding color clusters, and
- in each generated group, match colors between corresponding color clusters of this group, in order to generate a list of correspondences between colors of the first view and colors of the at least one second view.
PCT/EP2012/076229 2012-05-03 2012-12-19 Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view WO2013164043A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP12305499.1 2012-05-03
EP12305499 2012-05-03

Publications (1)

Publication Number Publication Date
WO2013164043A1 (en) 2013-11-07

Family

ID=47504957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2012/076229 WO2013164043A1 (en) 2012-05-03 2012-12-19 Method and system for determining a color mapping model able to transform colors of a first view into colors of at least one second view

Country Status (1)

Country Link
WO (1) WO2013164043A1 (en)


Non-Patent Citations (13)

- D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24(5), pp. 603-619, 2002.
- D.G. Lowe et al., "Distinctive image features from scale invariant keypoints", International Journal of Computer Vision, Vol. 60(2), pp. 91-110, 2004.
- D. Scharstein and C. Pal, "Learning conditional random fields for stereo", IEEE Conference on Computer Vision and Pattern Recognition (CVPR'07), pp. 1-8, 2007.
- E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between images", IEEE Computer Graphics and Applications, Vol. 21(5), pp. 34-41, 2001.
- F. Pitie, A.C. Kokaram, and R. Dahyot, "Automated colour grading using colour distribution transfer", Computer Vision and Image Understanding, Vol. 107(1-2), pp. 123-137, 2007.
- H. Motulsky and R. Brown, "Detecting outliers when fitting data with nonlinear regression - a new method based on robust nonlinear regression and the false discovery rate", BMC Bioinformatics, Vol. 7(1), p. 123, 2006.
- Hasan S.F. et al., "Robust color correction for stereo", Conference for Visual Media Production (CVMP), IEEE, pp. 101-108, 2011.
- Kenji Yamamoto et al., "Color correction for multi-view video using energy minimization of view networks", International Journal of Automation and Computing, Vol. 5(3), pp. 234-245, 2008.
- M.R. Luo, G. Cui, and B. Rigg, "The development of the CIE 2000 colour-difference formula: CIEDE2000", Color Research & Application, Vol. 26(5), pp. 340-350, 2001.
- Mehrdad Panahpour Tehrani, Akio Ishikawa, Shigeyuki Sakazawa, and Atsushi Koike, "Iterative colour correction of multicamera systems using corresponding feature points", Journal of Visual Communication and Image Representation, Vol. 21(5-6), pp. 377-391, 2010.
- Qi Wang et al., "Robust color correction in stereo vision", 18th IEEE International Conference on Image Processing (ICIP), pp. 965-968, 2011.
- W. Xu and J. Mulligan, "Performance evaluation of color correction approaches for automatic multi-view image and video stitching", Proc. CVPR'10, pp. 263-270, 2010.
- Yoav HaCohen, Eli Shechtman, Dan B Goldman, and Dani Lischinski, "Non-rigid dense correspondence with applications for image enhancement", ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2011), 30(4):70:1-70:9, 2011.

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015086530A1 (en) * 2013-12-10 2015-06-18 Thomson Licensing Method for compensating for color differences between different images of a same scene
US20160323563A1 (en) * 2013-12-10 2016-11-03 Thomson Licensing Method for compensating for color differences between different images of a same scene
EP3001668A1 (en) * 2014-09-24 2016-03-30 Thomson Licensing Method for compensating for color differences between different images of a same scene
US10262441B2 (en) 2015-02-18 2019-04-16 Qualcomm Incorporated Using features at multiple scales for color transfer in augmented reality
CN106650755A (en) * 2016-12-26 2017-05-10 哈尔滨工程大学 Feature extraction method based on color feature
CN109919899A (en) * 2017-12-13 2019-06-21 香港纺织及成衣研发中心有限公司 The method for evaluating quality of image based on multispectral imaging
EP3724850A4 (en) * 2017-12-13 2021-09-01 The Hong Kong Research Institute of Textiles and Apparel Limited Color quality assessment based on multispectral imaging


Legal Events

- 121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 12810252; Country of ref document: EP; Kind code of ref document: A1)
- NENP: Non-entry into the national phase (Ref country code: DE)
- 122 EP: PCT application non-entry in European phase (Ref document number: 12810252; Country of ref document: EP; Kind code of ref document: A1)