WO2006062508A1 - Dynamic warp map generation system and method - Google Patents

Dynamic warp map generation system and method

Info

Publication number
WO2006062508A1
WO2006062508A1 · PCT/US2004/040851 · US2004040851W
Authority
WO
WIPO (PCT)
Prior art keywords
hybrid
space
dimensional
distortion
parameters
Prior art date
Application number
PCT/US2004/040851
Other languages
French (fr)
Inventor
Zorawar S. Bassi
Louie Lee
Gregory Lionel Smith
Original Assignee
Silicon Optix Inc.
Priority date
Filing date
Publication date
Application filed by Silicon Optix Inc. filed Critical Silicon Optix Inc.
Priority to JP2007545427A priority Critical patent/JP2008526055A/en
Priority to PCT/US2004/040851 priority patent/WO2006062508A1/en
Publication of WO2006062508A1 publication Critical patent/WO2006062508A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction

Definitions

  • the complete hypersurface, after error conditions have been met, is defined by the surface coefficients and the hybrid space division data, that is, the configuration of the control zones and geometry patches.
  • the hypersurface coefficients are referred to as "compacted coefficients".
  • Compacted warp interface 117 stores the hypersurface data (coefficients and division data) and relays them upon request to decompactor 150.
  • the function of compactor 110 is therefore preparing these compacted coefficients from a set of distortion and spatial transformation parameters, and storing these compacted warp coefficients in an interface in order to relay the coefficients upon request. This makes for easy, dynamic access to a great number of warp maps.
  • Warp maps are obtained by evaluating the hypersurface at specific control vectors, which is done by the decompactor 150.
  • FIG. 3A illustrates a typical prior art method to access numerous warp maps.
  • the method consists of acquiring grid maps 310 for warp map generation 320. Each generated warp map 330 is then ready to store and download at step 340.
  • the method involves storing a set of warp maps as 2D fitted surfaces. Storing the warp maps requires substantial hardware resources. To be efficient, a device based on this prior art method would require large memory interfaces. Moreover, the method would require rapid access to memories each time a warp map is needed "on the fly”. A large and fast memory device is costly.
  • the distortion parameters are restricted to certain discrete values, and the accuracy of the warp map with respect to the actual distortion parameters depends on how close the actual values are to the preset values. In contrast, FIG. 3B illustrates the compactor-decompactor method of the present invention.
  • individual grid maps 350 are acquired by grid set creator 360, which outputs hybrid grid set 362.
  • a warp set is generated at step 370, which outputs compacted coefficients 372.
  • These coefficients are stored at step 374, and upon request, are read out and decoded at step 380 to dynamically generate a specific warp map 382, which is downloaded at step 384.
  • a surface evaluation is conducted in each control zone and geometry patch to smooth out the differences.
  • the hypersurface is evaluated, or decoded, at a specific vector in control space to obtain a 2D surface that represents the original 2D spatial transformation for those specific control values, (i.e., distortion parameters).
  • the decoding of hypersurface to warp map is referred to as decompaction.
  • the 2D surface coefficients are the "warp coefficients", as they define the warp map.
  • the compacted coefficients can be viewed as a compression of the warp coefficients.
  • Controller 152 obtains the specific parameters including the desired setting and user parameters. It then translates these specific parameters into control parameters corresponding to a vector ᾱ_0 in the control space. Controller 152 then passes the control space vector to the decoder 153.
  • the control parameters might be the keystone angles and throw ratio for a particular projector setup.
  • Decoder 153 takes the control parameters and determines the control zone Z_c(ᾱ_0) to which they belong. Next, the appropriate component hypersurface is evaluated at ᾱ_0 to obtain a 2D surface u(x, y) = u^{klβ̄}(x, y, ᾱ_0) in each geometry patch (and similarly v(x, y)); a code sketch of this evaluation appears after this list.
  • This "de-compaction" procedure allows dynamic generation of warp maps for any control vector. Once the hypersurface has been obtained, the decompactor only needs the control vector as input to generate the map.
  • since the warp maps are in functional form, i.e. 2D surfaces rather than discrete grid data, they can easily be scaled (zoomed or shrunk) or flipped horizontally, vertically, or both. Scaling and flipping operations are simple transformations of the above warp coefficients.
  • An example application consists of a display converter to generate a map to fit a particular display surface from a standard normalized set by performing scaling operations.
  • FIG. 4 shows the logic flow diagram of dynamic warp system 100.
  • at steps 401 and 402, spatial transformation and distortion parameters are obtained.
  • 2D spatial transformation parameters and multidimensional distortion parameters are concatenated by concatenation stage 113.
  • a hybrid grid dataset and an associated hybrid space are therefore formed with each vector representing both sets of parameters.
  • the distortion component of the hybrid space is referred to as control space.
  • divider 114 divides up the control space into control zones, each control zone determining a subset of control parameters. The division of the control space is based on the desired fitting accuracy and is limited by the availability of hardware resources.
  • the geometry space (coordinate space) is also divided into geometry patches.
  • the control zones and geometry patches together divide the hybrid space into blocks.
  • the data in each block (geometry patch + control zone) is surface fit.
  • the surface fitting process is adaptable to the degree of accuracy required.
  • the surfaces are parameterized as polynomials and the degree of the polynomial determines the accuracy of the fit.
  • an error analysis of the surface fit is performed. The error of the fit is determined by comparing a set of points calculated from the fit against exact values obtained from the hybrid grid dataset obtained at step 410. If the difference between the two sets of values is more than a pre-determined tolerance level, the results of the error analysis are sent back to step 420 for a finer division of the hybrid space, resulting in a better resolution.
  • the compacted surface coefficients are saved. At this point the function of compactor 110 is completed.
  • decompactor 150 functions dynamically.
  • dynamic control parameters are obtained. These parameters include multidimensional geometric and optical distortion parameters, as well as user parameters, in a user-friendly format. Any of these parameters may change dynamically.
  • These parameters are then used to construct a hybrid space vector.
  • a warp map is decoded from the compacted surface coefficients. This decoded warp map represents a transformation that compensates for all geometric and optical distortion parameters. Once the warp map is determined, at step 480, it is relayed to be applied to actual pixel coordinates to determine input pixel coordinates.
  • dynamic warp system 100 is used for color gamut transformation.
  • the system handles transformation between two color spaces.
  • the input and output color spaces could have different properties. This is especially pronounced in a printer application, when an object on a display screen is to be printed on paper or other media.
  • the printer has to transform the display color space onto the printer color space (e.g. CMY).
  • such spaces might be of different dimensions (e.g., RGB to CMYK, or even higher-dimensional color spaces with a greater number of primaries).
  • the color spaces typically have different ranges.
  • a printer color space is not an "additive space” but a "subtractive space”.
  • a ray of blue on a display screen creates the color blue.
  • a dot of blue on a white paper eliminates all other colors and only reflects blue, and hence the name subtractive.
  • the color gamut transformation in this case is nonlinear, relating different ranges, and very complicated.
  • dynamic warp system 100 is adapted to input a set of control parameters corresponding to varying conditions and a set of spatial parameters, describing the mapping of RGB color coordinates to CMY or other color coordinates, which taken together form the hybrid grid dataset.
  • Divider 114 divides the hybrid vector space into hybrid blocks. Each block consists of a distortion parameter control zone and a color space geometry patch.
  • the system fits a hypersurface, represented by a number of coefficients, to estimate the transformation grid data.
  • These resulting coefficients are stored in an interface and are available for use by decoder 153 in decompactor 150 to construct a warp map, or more appropriately a "color map", that transforms the output color space onto the input color space.
  • Dynamic warp system 100 generates a hybrid space from the color space and control parameters and divides it into hybrid blocks as explained above. In each hybrid block, the system fits the grid transformation data with a hypersurface. Upon selection of a particular setting, the system decodes a warp map from a hypersurface, transforming a color space vector corresponding to the factory standard settings onto any specific setting determined by the user.
  • the dynamic warp map generation allows for mapping to the limitation of the gamut range, and seamless and graceful mapping of the outer edges.
  • a color sensor is used to calibrate an aging printer or monitor. The calibration measurement results are fed back as control parameters. Controller 152 constructs a hybrid space vector out of these measurement results.
  • the sensor calibration could also be used when printing on different types of paper as they yield different output colors.
  • a new warp map is generated for a new type of paper, yielding an optimal and seamless gamut transformation. On certain types of paper one could never get ideal printing results; however, this method assures improved printing quality.
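As referenced in the decoder discussion above, de-compaction reduces, for a fixed control vector ᾱ_0, the compacted hypersurface coefficients of one hybrid block to 2D warp coefficients. The sketch below is a minimal illustration that assumes a standard monomial basis and that ᾱ_0 already lies inside the block's control zone; the coefficient values and function names are hypothetical, not taken from the patent.

```python
import numpy as np

def decompact(coeffs, alpha0):
    """Collapse a component hypersurface to a 2D warp surface at a control vector.

    coeffs : compacted coefficients a_{ij g1...gN} for one hybrid block,
             shape (I+1, J+1, Gamma_1+1, ..., Gamma_N+1), standard monomial basis
    alpha0 : control vector (a^1_0, ..., a^N_0), assumed inside this block's zone

    Returns the 2D warp coefficients w_{ij} with u(x, y) = sum_ij w_ij x^i y^j,
    i.e. the warp map for this particular distortion setting.
    """
    w = np.asarray(coeffs, float)
    for a_n in reversed(alpha0):                  # contract the control axes one by one
        powers = a_n ** np.arange(w.shape[-1])    # (a^n_0)^{g_n}
        w = w @ powers                            # sum over the last axis
    return w                                      # shape (I+1, J+1)

# Hypothetical: cubic basis in x, y and two keystone angles for one block.
rng = np.random.default_rng(1)
coeffs = rng.normal(size=(4, 4, 4, 4))            # placeholder compacted coefficients
w = decompact(coeffs, alpha0=(12.0, -7.5))        # 2D warp coefficients
u_at = lambda x, y: (x ** np.arange(4)) @ w @ (y ** np.arange(4))
```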

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • Image Processing (AREA)

Abstract

Dynamic warp map generation system (FIG. 1) and corresponding method are disclosed. A compactor (110, FIG. 1) obtains spatial transformation parameters along with geometric and optical distortion parameters and combines them to form hybrid grid data in a hybrid vector space. This vector space is divided (114, FIG. 1) into hybrid blocks. In each hybrid block, the grid dataset is fitted with a hypersurface and the surface coefficients are saved in an interface. A decompactor (150, FIG. 1) obtains dynamic control parameters representing varying distortion parameters and generates a hybrid space vector. According to the hybrid space vector, a warp map is decoded (153, FIG. 1) from the hypersurface coefficients that compensates for dynamic geometric and optical distortions. In another example of the present invention, the dynamic warp map generation system is used for color gamut transformation.

Description

Title: DYNAMIC WARP MAP GENERATION SYSTEM AND METHOD
FIELD OF THE INVENTION This invention relates to electronic image warping, and more particularly to dynamic warp map generation in the case of varying distortions.
BACKGROUND OF THE INVENTION
Two-dimensional digital image processing comprises transforming the digital content of an input image and producing output digital image data to produce an output image. In display systems, the image data suffer intrinsic and situational distortions in such transformations. Intrinsic distortions are those characteristics of the display system that do not change under different circumstances. Examples of these distortions are tangential and radial lens distortions, the most common cases of which are pincushion and barrel distortions.
On the other hand, situational distortions depend on particular circumstances. Common examples of these distortions are horizontal and vertical keystone distortions. Some distortions could be considered intrinsic or situational depending on the particular display characteristics. For instance, geometrical distortions in a rear-projection television are fixed, while they change dramatically in a projector, depending on the geometry of the screen (e.g. flat or curved), the angle of projection, etc.
Electronic image warping is commonly used to compensate for geometric and optical distortions. A discussion of image warping can be found in George Wolberg's "Digital Image Warping", IEEE Computer Society Press, 1988. Many electronic image distortion or transformation algorithms are designed with the primary goal to simplify the hardware implementation. This objective often restricts the complexity and flexibility of the spatial transformation. United States Patent No. 4,472,732 to Bennett et al. discloses a method well suited for hardware implementations of real-time image processing systems and decomposes a 2D map into a series of 1D maps, which require only 1D filtering or re-sampling. United States Patent Nos. 5,175,808 to Sayre and 5,204,944 to Wolberg et al. disclose methods based on a pixel-by-pixel description. Both approaches are restrictive, as not all warps can be separated into 1D warps, and a pixel-by-pixel description is not suited for varying distortions, in addition to being very expensive to implement.
Other algorithms for spatial transformations are limited to certain mapping types, such as rotations, linear scaling, affine, and perspective transforms as described in U.S. Patent No. 4,835,532 to Fant, U.S. Patent No.
4,975,976 to Kimata et al., U.S. Patent No. 5,808,623 to Hamburg, and U.S.
Patent No. 6,097,855 to Levien.
In the case of varying distortion parameters, different warp maps are needed for any given situation. One way to handle dynamic warp map assignment is to generate a warp map each time the situation changes.
Obviously this method is not efficient, and in the case of real time video applications, it is not practical.
A more efficient way of dynamic warp assignment is to generate a set of warp maps offline and store these maps in a memory. When needed, one of these warp maps is called and used for distortion compensation according to a particular set of distortion parameters. This technique requires substantial memory to store all the warp maps. Accordingly, this method becomes impractical in the case of too many parameters and configurations. Furthermore, the choice of a warp map is limited to a given set of pre-determined distortion parameters, which could greatly compromise the quality of the output image.
It is therefore necessary to devise a dynamic warp generation scheme which is efficient from a hardware implementation point of view, is flexible in terms of the types of distortions allowable, and is able to render high quality output images.
SUMMARY OF THE INVENTION
The present invention provides in one aspect, an electronic system for dynamic m-dimensional digital data transformation, subject to varying N- dimensional distortion and control parameters, said system comprising: a. an input interface to obtain m-dimensional transformation data and N-dimensional distortion and control parameter sets, b. a concatenation stage coupled to said input interface, to concatenate m-dimensional transformation data and N- dimensional distortion and control parameter sets and to produce m+N dimensional hybrid grid data in a corresponding hybrid vector space, c. a divider coupled to said concatenation stage, to divide said hybrid vector space into a plurality of hybrid blocks, consisting of N-dimensional distortion control zones and m-dimensional geometry patches, based on a desired accuracy level, d. a surface function estimator, coupled to said divider, to parameterize said hybrid grid data, to generate a hypersurface map in each hybrid block, to estimate said hybrid grid data, wherein said hypersurfaces are represented by a plurality of compacted coefficients, e. a surface map interface, coupled to said surface function estimator, to store said compacted coefficients; f. a controller, to obtain instantaneous user and control parameters and to generate a vector in the hybrid vector space based on said user and control parameters, g. a decoder, coupled to said surface map interface and said controller, to dynamically compute instantaneous warp maps from said compacted coefficients corresponding to said vector in the hybrid vector space, and, h. an output interface to store and relay said instantaneous maps. In another aspect, the present invention provides an electronic system for dynamic two-dimensional digital image transformation described by two- dimensional spatial grid data and varying geometric and optical distortions described by multidimensional parameter data sets, said system comprising: a. an input interface to obtain two-dimensional spatial grid data and multidimensional distortion parameter data sets representing varying geometric and optical distortions, b. a concatenation stage, coupled to said input interface, to concatenate the multidimensional distortion parameter data sets and the two-dimensional spatial transformation grid data, and to produce hybrid grid data in a corresponding hybrid vector space, c. a divider, coupled to said concatenation stage, to divide said hybrid vector space into a plurality of hybrid blocks, consisting of multidimensional distortion control zones and two-dimensional spatial geometry patches, based on a desired accuracy level, d. a surface function estimator, coupled to said divider, to parameterize said hybrid grid data to generate a hypersurface map in each hybrid block, represented by a number of compacted coefficients, e. a warp interface, coupled to said surface functional estimator, to store said compacted coefficients representing the hypersurface maps, f. a controller to obtain instantaneous control parameters, including display parameters, distortion parameters, and user parameters, and to calculate hybrid space vectors from said control parameters, g. a decoder, coupled to said compacted warp interface and said controller, to dynamically compute instantaneous warp maps from said compacted coefficients corresponding to said hybrid space vectors, and, h. an output interface to store and relay said instantaneous warp maps.
In another aspect, the present invention provides an electronic method for dynamic m-dimensional digital data transformation, subject to varying N-dimensional distortion and control parameters, said method comprising: a. obtaining m-dimensional transformation data and N-dimensional distortion and control parameter sets, b. concatenating the m-dimensional transformation data and N-dimensional distortion and control parameter sets and producing m+N dimensional hybrid grid data in a corresponding hybrid vector space, c. dividing said hybrid vector space into a plurality of hybrid blocks, consisting of N-dimensional distortion control zones and m-dimensional geometry patches, based on a desired accuracy level, d. parameterizing said hybrid grid data to generate a hypersurface map in each hybrid block to estimate said hybrid grid data, wherein said hypersurfaces are represented by a plurality of compacted coefficients, e. storing said compacted coefficients, f. obtaining instantaneous user and control parameters and generating a vector in the hybrid vector space based on said user and control parameters, g. dynamically computing instantaneous warp maps from said compacted coefficients corresponding to said vectors in the hybrid vector space obtained in (f), and, h. storing and relaying said instantaneous warp maps. In another aspect the invention provides an electronic method for dynamic two-dimensional digital image transformation described by two-dimensional spatial grid data and varying geometric and optical distortions described by multidimensional parameter data sets, said method comprising: a. obtaining the two-dimensional spatial grid data and multidimensional distortion parameter data sets representing varying geometric and optical distortions, b. concatenating the multidimensional distortion parameter data sets and the two-dimensional spatial grid data, to produce hybrid grid data in a corresponding hybrid vector space, c. dividing said hybrid vector space into a plurality of hybrid blocks, consisting of multidimensional distortion control zones and two-dimensional spatial geometry patches, based on a desired accuracy level and, d. parameterizing said hybrid grid data to generate a hypersurface in each hybrid block, wherein the hypersurface estimates the hybrid grid data, and wherein the hypersurface is represented by a number of compacted coefficients, e. storing said compacted coefficients representing the hypersurface maps, f. obtaining instantaneous control parameters, including display parameters, distortion parameters, and user parameters, and calculating hybrid space vectors based on said control parameters, g. dynamically computing instantaneous warp maps from said compacted coefficients corresponding to said hybrid space vectors obtained in (f), and, h. storing and relaying said instantaneous warp maps. Further details of different aspects and advantages of the embodiments of the invention will be revealed in the following description along with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In the accompanying drawings:
FIG. 1 is a block diagram of a dynamic warp system built in accordance with the present invention;
FIG. 2A is a graphical representation of a three-dimensional control space division;
FIG. 2B is a graphical representation of a two-dimensional coordinate space division;
FIG. 3A is an illustration of a prior art method for generating a warp set with different warp maps from different grid maps; FIG. 3B is an illustration of the method of the present invention for generating different warp maps; and,
FIG. 4 is a flow logic representation of an example of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Reference is first made to FIG. 1 that illustrates an example of a dynamic warp system 100 made in accordance with the present invention. Dynamic warp system 100 comprises compactor 110 and decompactor 150. Compactor 110 comprises input interface 112, concatenation stage 113, divider 114, surface function estimator 115, error analysis stage 116, and compacted warp interface 117. Decompactor 150 comprises controller 152, decoder 153, and output interface 154. In any display system there are optical and geometric distortions associated with a set of distortion parameters. Some of these parameters represent the geometric transformation specific to a display system. For instance, the shape and size of the display surface in a projection system are important geometric parameters. A curved display surface, for example, has a specific surface map with associated parameters. Other geometric parameters may define situational distortions, for example the horizontal and vertical keystone angles associated with keystone distortions. Similarly, optical distortions (e.g. tangential and radial lens imperfections, lens offset, etc.) could also be characterized by specific optical parameters. Each unique combination of distortion parameters defines a unique spatial transformation, which will warp the image accordingly.
The spatial transformation, or warp map, is basically a two-dimensional (2D) transformation assigning (in the inverse formalism) input pixel coordinates to every output pixel. The 2D spatial transformation is stated in the form of grid data values. The grid data can be viewed as a "parameterization" of the spatial transformation, which, as described above, is in turn determined by the distortion parameters. The spatial transformation assigns input coordinates to output coordinates. If (X, Y) represent the coordinates of a pixel, P, in the output space, and (U, V) represent the coordinates of the mapped pixel, P′, in the input space, the spatial 2D transformation is represented by:
(X, Y) → (U, V)
If there is a relation like this for every output pixel, then every output pixel coordinate is mapped onto an input pixel coordinate and this is called a grid data representation of the transformation. Mapping the output pixel coordinates onto the input pixel coordinate space, or so-called "inverse mapping", has the advantage that there will be no output pixels unassigned and hence no "holes" in the output image. In this invention, inverse mapping is used.
In general, the grid data description can be summarized by the following formal equations: U = F_U(X, Y), V = F_V(X, Y).
These equations state that a pixel in the output image, with coordinates (X, Y), is mapped via the spatial transformation F = (F_U, F_V) to a pixel in the input image, with coordinates (U, V). Accordingly, the grid data relation {(U_i, V_i), (X_i, Y_i)} specifies a set of pixels {(X_i, Y_i)} in the output space, which are inversely mapped onto the pixels {(U_i, V_i)} in the input space. This is equivalent to a pixel-by-pixel description for the set of pixels {(X_i, Y_i)}.
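As a concrete illustration of this grid-data (inverse-mapping) representation, the sketch below stores a warp map as two per-output-pixel arrays and applies it with nearest-neighbour sampling. It is a minimal NumPy illustration, not the patent's hardware implementation; the function and variable names are assumptions made here for clarity.

```python
import numpy as np

def apply_inverse_warp(src, map_u, map_v):
    """Apply a grid-data (inverse) warp map.

    src    : input image, shape (h_in, w_in) or (h_in, w_in, channels)
    map_u  : for each output pixel (X, Y), the input column U, shape (H, W)
    map_v  : for each output pixel (X, Y), the input row V, shape (H, W)

    Every output pixel is assigned a source coordinate, so there are no
    unassigned "holes" in the output.  Nearest-neighbour sampling is used
    for brevity; a real warper would filter/interpolate.
    """
    h_in, w_in = src.shape[:2]
    u = np.clip(np.rint(map_u).astype(int), 0, w_in - 1)
    v = np.clip(np.rint(map_v).astype(int), 0, h_in - 1)
    return src[v, u]

# Example: a pure horizontal shift of 10 pixels as a trivial warp map.
H, W = 480, 640
X, Y = np.meshgrid(np.arange(W), np.arange(H))
map_u, map_v = X - 10.0, Y.astype(float)      # U = F_U(X, Y), V = F_V(X, Y)
out = apply_inverse_warp(np.random.rand(H, W), map_u, map_v)
```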
Grid data description, though very accurate, takes a substantial amount of hardware resources and, as such, is an inefficient way to implement a
2D transformation electronically. Hence, in electronic image processing, surface fitting, which applies to both single-pass and two-pass methods, is the preferred way of describing a 2D spatial transformation. Surface fitting gives an accurate description that simultaneously incorporates the geometric behavior in both directions.
As mentioned above, in any realistic display system, there are geometric and optical distortions present. These distortions could be either constant or dynamically changing. A set of distortion parameters is therefore associated with any particular setting. Vivid examples of varying distortion parameters are keystone distortion angles in a projection system. For a given distortion type, there could be one or many associated parameters. Therefore these parameters are determined as points or vectors in a multidimensional space.
Input interface 112 obtains 2D spatial transformation parameters (i.e. grid data), as well as multidimensional distortion parameters, for all the unique distortions of a given system in a given situation, and relays them to concatenation stage 113. Any unique distortion is defined by the same types of parameters, albeit with different values, such as horizontal/vertical keystone angles. Hence the distortion parameters are referred to as distortions of the same type. The function of the concatenation stage 113 is to merge the two vector spaces of 2D spatial grid data and multidimensional distortion parameters and to produce a combined vector space for the hybrid parameters (i.e. hybrid = spatial + distortion parameters). These data form a new hybrid parameter space. For example, if (X, Y) represent output pixel coordinates, (U, V) input pixels, and (d, θ, φ, ...) represent a set of distortion parameters including the lens offset, vertical and horizontal keystone angles, the combined hybrid parameter space is denoted by: (U, V) × (X, Y, d, θ, φ, ...).
The notation used is to separate the hybrid space domain (X, Y, d, θ, φ, ...) from the range (U, V); that is, points in the hybrid space domain are mapped to points in the range. The concatenation stage 113 performs this merging to obtain a hybrid grid dataset as explained in further detail below. The distortion component of the hybrid space domain will be hereafter referred to as the "control space". Each vector in the control space corresponds to a unique set of values for N distortion parameters, which identify a warp map for a 2D spatial transformation. Each warp map is therefore identified by a unique N-vector of control parameters, ᾱ_g:

ᾱ_g = (a_g^1, a_g^2, ..., a_g^N)

where N labels the number of distortion parameters (geometric and optical) and g indexes the specific warp. In the case of keystone distortions, two of the control parameters a_g^i correspond to keystone angles. The vector notation ᾱ is used to refer to the N distortion parameters as a whole, without reference to a specific warp.
For a total of G different warp maps, g = 1, ..., G, the set of control vectors is identified as: S_ᾱ = {ᾱ_1, ᾱ_2, ..., ᾱ_G}.
Each warp map, identified by a unique ᾱ_g, has grid data associated with its spatial component. These grid data can be expressed as the relation:

{(U_i^g, V_i^g), (X_i^g, Y_i^g)}, i = 1, ..., M_g,

where M_g gives the number of spatial points for warp g corresponding to ᾱ_g.
Concatenation stage 113 combines the spatial data with the corresponding distortion data to form the hybrid relation for warp g:

{(U_i^g, V_i^g), (ᾱ_g, X_i^g, Y_i^g)}, i = 1, ..., M_g

Lastly, taking the hybrid relations for all warps gives the hybrid grid data as:

{(U_i^g, V_i^g), (ᾱ_g, X_i^g, Y_i^g)}, i = 1, ..., M_g, g = 1, ..., G.

This grid data relation maps points in the hybrid space domain, (ᾱ_g, X_i^g, Y_i^g), to points in the input coordinate space (U_i^g, V_i^g). In one example of the invention, concatenation stage 113 is adapted to reduce the number of grid points by sub-sampling. Sub-sampling is commonly achieved by dropping data points at regular intervals. This is necessary if the dataset is very large, which can considerably strain the fitting at the surface function estimator stage.
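The concatenation performed by stage 113 can be pictured with a short sketch: each spatial grid point (X_i, Y_i) of warp g is prefixed with that warp's control vector ᾱ_g to form hybrid domain points, paired with the corresponding (U_i, V_i). The array layout, placeholder keystone values, and function names below are assumptions for illustration only, not the patent's internal data format.

```python
import numpy as np

def build_hybrid_grid(warps):
    """Concatenate spatial grid data with distortion parameters.

    warps : list of (alpha_g, X, Y, U, V) where
            alpha_g : length-N distortion/control vector for warp g
            X, Y    : output-pixel coordinates, shape (M_g,)
            U, V    : mapped input coordinates,  shape (M_g,)

    Returns (domain, rng):
            domain : hybrid domain points (alpha_g, X_i, Y_i), shape (sum M_g, N + 2)
            rng    : corresponding input coordinates (U_i, V_i), shape (sum M_g, 2)
    """
    dom_rows, rng_rows = [], []
    for alpha_g, X, Y, U, V in warps:
        a = np.tile(np.asarray(alpha_g, float), (len(X), 1))   # repeat alpha_g per point
        dom_rows.append(np.column_stack([a, X, Y]))
        rng_rows.append(np.column_stack([U, V]))
    return np.vstack(dom_rows), np.vstack(rng_rows)

# Hypothetical example: two keystone settings (theta_h, theta_v) over a coarse 3x3 grid.
X, Y = [g.ravel() for g in np.meshgrid(np.linspace(0, 640, 3), np.linspace(0, 480, 3))]
warps = [((10.0, -5.0), X, Y, X + 2.0, Y - 1.0),   # placeholder (U, V) values
         ((20.0,  0.0), X, Y, X + 4.0, Y + 0.5)]
domain, rng = build_hybrid_grid(warps)              # domain has N + 2 = 4 columns
```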
Divider 114 receives the concatenated parameters, or hybrid grid data, from concatenation stage 113, and divides up the hybrid space domain (ᾱ, X, Y) into hybrid blocks. Each hybrid block consists of a "control zone" and a spatial geometry patch. The control space (i.e. the distortion component of the hybrid space domain) is divided into control zones 210 as shown in FIG. 2A. In addition, divider 114 also divides the coordinate space into spatial geometry patches. In each hybrid block, the control zone is a hyper-volume delineated by control parameter intervals. In FIG. 2A, three control intervals 212, 214, and 216 are shown, delineating the shaded control zone 210. In general, for a control parameter a^n, its domain P^n_min ≤ a^n ≤ P^n_max is divided into B_n intervals with boundaries:

{P^n_0 = P^n_min, P^n_1, ..., P^n_{B_n} = P^n_max}

For N control parameters, there are N such control intervals, one from each control parameter domain, delineating a control zone. There are a total of B = B_1 B_2 ⋯ B_N control zones. Each control zone can be identified by its upper boundary indices. For example, Z_c(β̄), β̄ = (β_1, β_2, ..., β_N), gives the control zone with extent:

P^n_{β_n−1} < a^n ≤ P^n_{β_n}, n = 1, ..., N

The volume of control zone Z_c(β̄), denoted by V(β̄), is given by:

V(β̄) = ∏_{n=1}^{N} (P^n_{β_n} − P^n_{β_n−1})
The zone division of the control space is based upon a trade-off between a desired accuracy in surface fitting and available hardware resources. The more control zones, the greater the accuracy, however, the more computation power and storage space required. The control zones do not have to be of equal volume, and the boundary positions can be independently varied for optimization.
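A small sketch of the control-zone bookkeeping described above: given per-parameter boundary sets, it finds the upper-boundary indices β̄ of the zone containing a control vector and computes the zone volume V(β̄). The boundary values and function names are hypothetical, chosen here only to illustrate the idea.

```python
import numpy as np

def zone_index(alpha, boundaries):
    """Locate the control zone Z_c(beta) containing a control vector.

    alpha      : length-N control vector (a^1, ..., a^N)
    boundaries : list of N ascending arrays; boundaries[n] holds
                 {P^n_0 = P^n_min, ..., P^n_{B_n} = P^n_max}

    Returns beta = (beta_1, ..., beta_N) with 1-based upper-boundary indices,
    as in the text, so that P^n_{beta_n - 1} < a^n <= P^n_{beta_n}.
    """
    beta = []
    for a_n, P_n in zip(alpha, boundaries):
        P_n = np.asarray(P_n, float)
        b = int(np.searchsorted(P_n, a_n, side="left"))   # first boundary at or above a_n
        beta.append(min(max(b, 1), len(P_n) - 1))          # clamp into 1..B_n
    return tuple(beta)

def zone_volume(beta, boundaries):
    """V(beta) = product over n of (P^n_{beta_n} - P^n_{beta_n - 1})."""
    return float(np.prod([boundaries[n][b] - boundaries[n][b - 1]
                          for n, b in enumerate(beta)]))

# Hypothetical: horizontal/vertical keystone angles split into 4 and 3 intervals.
bnds = [np.linspace(-40, 40, 5), np.linspace(-30, 30, 4)]   # B_1 = 4, B_2 = 3
beta = zone_index((12.0, -7.5), bnds)
vol = zone_volume(beta, bnds)
```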
The division of the coordinate space into geometry patches proceeds in an analogous manner, as shown in FIG. 2B. The coordinate space, with coordinates (X, Y), has a domain size of [0, W] × [0, H], that is:

0 ≤ X ≤ W, 0 ≤ Y ≤ H

This domain is divided into K spatial intervals horizontally (in X) and L spatial intervals vertically (in Y). This gives a total of KL patches, in an L × K rectangular array type arrangement as in FIG. 2B. The patch boundaries are given by:

S_Px = {P^x_0 = 0, P^x_1, ..., P^x_K = W}
S_Py = {P^y_0 = 0, P^y_1, ..., P^y_L = H}

Each geometry patch can be identified by its upper boundary indices, hence Z_P(k, l) gives the patch with extent:

P^x_{k−1} < X ≤ P^x_k, P^y_{l−1} < Y ≤ P^y_l

k = 1, ..., K, l = 1, ..., L
The geometry patch division is also determined by a trade-off between desired accuracy in surface fitting and available hardware resources.
Complex warps will require more geometry patches, hence more computational power and storage space, to achieve the same level of accuracy. In principle, a geometry patch can be assigned to every set of four adjacent pixel points, however, this would require a large amount of memory to store the spatial transformation. The basis functions can also be varied to optimize the trade-off.
The geometry patches do not have to be the same size but, as implied by the equations, should fit together in a regular manner. The term "regular" in this context is intended to mean that the geometry patches can be arranged into rows and/or columns. This is not restrictive since any division by rectangular geometry patches can be brought to a regular arrangement. Both regular and irregular geometry patch divisions are shown in FIG. 2B. It is shown there how to achieve a regular geometry patch scheme without sacrificing the level of subdivision.
As can be seen from FIG. 1, division into control zones and geometry patches is an iterative procedure. On the first iteration, a small number of zones/patches, say a single control zone and a single geometry patch, can be used. A control zone together with a geometry patch defines an (N + 2)-dimensional hyper-cube in the hybrid parameter space, hence divider 114 can be viewed as dividing the hybrid space into hyper-cubes or "hybrid blocks".
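Putting the two divisions together, a hybrid block is located by applying the same interval search to the spatial coordinates and to each control parameter. The sketch below assumes rectangular, regularly arranged patches and zones, as in the text; the boundary values and names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def interval_index(value, bounds):
    """1-based index b with bounds[b-1] < value <= bounds[b] (clamped at the ends)."""
    b = int(np.searchsorted(np.asarray(bounds, float), value, side="left"))
    return min(max(b, 1), len(bounds) - 1)

def hybrid_block(x, y, alpha, patch_x, patch_y, ctrl_bounds):
    """Identify the hybrid block Z_P(k, l) x Z_c(beta) containing a hybrid domain point."""
    k = interval_index(x, patch_x)
    l = interval_index(y, patch_y)
    beta = tuple(interval_index(a_n, P_n) for a_n, P_n in zip(alpha, ctrl_bounds))
    return k, l, beta

# Hypothetical: 2x2 geometry patches over a 640x480 image, two keystone angles.
k, l, beta = hybrid_block(500.0, 100.0, (12.0, -7.5),
                          patch_x=[0, 320, 640], patch_y=[0, 240, 480],
                          ctrl_bounds=[np.linspace(-40, 40, 5), np.linspace(-30, 30, 4)])
```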
Surface function estimator 115 receives the hybrid grid dataset and the hybrid domain division information (i.e. arrangement of control zones and geometry patches) from divider 114. Surface function estimator 115 then fits the hybrid grid dataset with a single hypersurface in each hybrid block. Each input coordinate is fitted separately, giving two hypersurfaces, one for U and one for V. In one example of the invention, each hybrid block is fitted independently. In another example implementation of the present invention, all the blocks are fitted together. The finer the division of the hybrid domain, the more accurate the hypersurface fit. Each point on the fitted hypersurface represents a specific spatial coordinate in a specific warp map. The hypersurface is (N + 2)-dimensional. Surface function estimator 115 transforms the hybrid data from a discrete grid data form to a closed functional form, substantially reducing the number of parameters. The method of fitting and basis functions used can be variable. In one example of the present invention, a linear least squares approach with a tensor product polynomial based basis (such as a tensor product Bezier basis or B-spline basis) is used. The equations below reflect this linear and tensor product form, though the basis is kept general. The data for the two hypersurfaces, after rearranging the indices, can be written as:
{U_i ↔ (X_i, Y_i, a_i^1, a_i^2, ..., a_i^N)}, i = 1, ..., G × M_g
{V_i ↔ (X_i, Y_i, a_i^1, a_i^2, ..., a_i^N)}, i = 1, ..., G × M_g

The surface basis functions are denoted by:

F^i(x), i = 0, ..., I
F^j(y), j = 0, ..., J
F^{γ_n}(a^n), γ_n = 0, ..., Γ_n, n = 1, ..., N

The basis dimensions are given by I + 1, J + 1, Γ_n + 1 for X, Y and a^n respectively. For the standard polynomial basis, the basis functions are:

F^i(x) = x^i
F^j(y) = y^j
F^{γ_n}(a^n) = (a^n)^{γ_n}

Making the surface fit, the hypersurfaces obtained are:

u^{klβ̄}(x, y, ᾱ) = Σ_{i=0}^{I} Σ_{j=0}^{J} Σ_{γ_1=0}^{Γ_1} ⋯ Σ_{γ_N=0}^{Γ_N} a^{klβ̄}_{ij γ_1⋯γ_N} F^i(x) F^j(y) F^{γ_1}(a^1) ⋯ F^{γ_N}(a^N)
v^{klβ̄}(x, y, ᾱ) = Σ_{i=0}^{I} Σ_{j=0}^{J} Σ_{γ_1=0}^{Γ_1} ⋯ Σ_{γ_N=0}^{Γ_N} b^{klβ̄}_{ij γ_1⋯γ_N} F^i(x) F^j(y) F^{γ_1}(a^1) ⋯ F^{γ_N}(a^N)

P^x_{k−1} ≤ x ≤ P^x_k, P^y_{l−1} < y ≤ P^y_l, P^n_{β_n−1} < a^n ≤ P^n_{β_n} (patch/zone boundaries)
k = 1, ..., K, l = 1, ..., L, β_n = 1, ..., B_n (patch/zone indices)
n = 1, ..., N (control parameter index)

In the above, lower case letters x, y, u, v have been used for the coordinate parameters in the functional form. Each hypersurface consists of many component surfaces, indicated by the superscripts klβ̄, which are the fitted surfaces restricted to the patch/zone Z_P(k, l) × Z_c(β̄). Thus, u^{klβ̄} simply means the hypersurface restricted to Z_P(k, l) × Z_c(β̄). The a's and b's are the surface coefficients that, along with the hybrid domain division information, define the surface:

a^{klβ̄}_{ij γ_1⋯γ_N}, b^{klβ̄}_{ij γ_1⋯γ_N}
i = 0, ..., I, j = 0, ..., J, γ_n = 0, ..., Γ_n
k = 1, ..., K, l = 1, ..., L, β_n = 1, ..., B_n, n = 1, ..., N
At this stage, surface function estimator 115 has, in essence, made the transformation from a large multidimensional dataset of points to a small number of surface coefficients. For the common case of having the same number of spatial points for each warp (M g independent of g ), the change in storage numbers is:
\[
M \times G \;\longrightarrow\; K \times L \times (I+1) \times (J+1) \times \prod_{n=1,\ldots,N} B_n \times \prod_{n=1,\ldots,N} (\Gamma_n + 1)
\]
Though it may not look it at first glance, this is a substantial reduction of data. As an example, consider a typical hybrid grid dataset description for warping a VGA image of 640 × 480 points (= M) for a total of 81 × 61 (= G) keystone positions (horizontal angles from -40 to +40 degrees in 1 degree steps and vertical angles from -30 to +30 degrees in 1 degree steps). It should be understood that in a grid description, all pixels must have their data given. This corresponds to a total of approximately 1.5 billion data points using the grid (pixel-by-pixel) approach. With the surface description, reasonable accuracy (<= 2 pixel error) can be obtained with K = 2, L = 2, I = 3, J = 3, B_1 = 6, B_2 = 6, Γ_1 = 3, Γ_2 = 3. Here the control indices 1 and 2 correspond to the horizontal and vertical keystone angles. The result is a total of 36864 coefficients, a substantial reduction in data. Even a reduction in the number of coordinates to 17 × 13 (which will require some interpolation at a later stage to obtain the coordinates for the dropped points) gives a total of 1.1 million points, with the hypersurface giving a 30 times improvement.
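The storage arithmetic of this example can be checked with a few lines (a sketch using only the numbers quoted above; nothing here is new data):

    # Storage comparison for the keystone example above.
    M = 640 * 480                      # spatial points per warp map
    G = 81 * 61                        # keystone positions (control vectors)
    print(M * G)                       # 1517875200, i.e. ~1.5 billion grid data points

    K, L, I, J = 2, 2, 3, 3
    B = [6, 6]                         # control zones per keystone angle
    Gamma = [3, 3]                     # basis degree per control parameter
    coeffs = K * L * (I + 1) * (J + 1)
    for Bn, Gn in zip(B, Gamma):
        coeffs *= Bn * (Gn + 1)
    print(coeffs)                      # 36864 compacted coefficients

    print((17 * 13 * G) // coeffs)     # 29, the roughly 30x gain over the reduced grid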
Once the hypersurfaces are obtained, error analysis stage 116 evaluates the hypersurfaces to obtain a set of computed coordinates in the input space. The computed coordinates are generated at the domain points contained in the hybrid grid dataset, which error analysis stage 116 receives independently from concatenation stage 113. Let (U_i^c, V_i^c) denote the computed coordinate at the hybrid domain point (X_i, Y_i, ā_i); then these are generated using the following equations:
\[
U_i^c = u(X_i, Y_i, \bar{a}_i), \qquad V_i^c = v(X_i, Y_i, \bar{a}_i)
\]
In the above, the klβ superscripts have been removed and the notation indicates the full hypersurfaces, which include all the component surfaces. During evaluation of these equations, the component surface corresponding to the block to which (X_i, Y_i, ā_i) belongs is selected. For example, if (X_i, Y_i, ā_i) lie in Z_P(k,l) × Z_C(β), then u = u^{klβ}. Error analysis stage 116 compares these computed values with the exact coordinates (U_i, V_i) taken from the hybrid grid dataset. It then determines whether a preset error tolerance level condition has been satisfied. This condition takes the form:
\[
\left\| (U_i, V_i) - (U_i^c, V_i^c) \right\| \le E_{\max}, \qquad \text{for all } i
\]
Here the maximum allowable error is E_max, and the norm ‖·‖ can be any appropriate distance function, the most commonly used norms being:
\[
\left\| (U_i, V_i) - (U_i^c, V_i^c) \right\|_2 = \sqrt{(U_i - U_i^c)^2 + (V_i - V_i^c)^2}
\]
\[
\left\| (U_i, V_i) - (U_i^c, V_i^c) \right\|_\infty = \max\left( \left| U_i - U_i^c \right|, \left| V_i - V_i^c \right| \right)
\]
It is also possible to set independent tolerance levels for each coordinate, for example:
\[
\left| U_i - U_i^c \right| \le E^u_{\max}, \qquad \left| V_i - V_i^c \right| \le E^v_{\max}
\]
The error condition simply states that the computed coordinates should approximate the exact coordinates accurately. If the preset tolerance level condition is not satisfied (i.e. the distance is greater than the tolerance level), then error analysis stage 116 sends the results back to divider 114. The results sent to divider 114 consist of a listing of the domain points at which the error was larger than the tolerance level. These domain points are referred to as broken points. Based on these results, divider 114 further subdivides the hybrid domain space. Only those blocks where the errors are larger than the tolerance level are subdivided. As discussed above, the blocks where the errors are larger than the tolerance level can be determined from the listing of broken points. Subdivision can occur at the control zone level, where only specific control intervals are divided, at the geometry patch level, where only specific spatial intervals are divided, or a combination of the two. These steps are repeated until the desired accuracy is obtained and the tolerance levels are met. By increasing the number of geometry patches and/or control zones, the maximum error obtained can be reduced to an arbitrarily small value, implying that successive subdivisions can eventually meet any tolerance level. However, a practical limit is normally set by hardware limitations (each subdivision increases the amount of data storage) and application specific criteria. After a certain number of subdivisions, the process becomes redundant, as the accuracy gained is small with no noticeable improvement to warping quality. In the extreme limit, every domain point sits on a block vertex, which essentially becomes a pixel-by-pixel description of the transformation at every control vector.
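The fit / error-check / subdivide loop might be sketched as follows; this reuses the hypothetical fit_block and eval_block helpers from the earlier example, assumes a single control parameter, and for brevity subdivides only the control interval of a failing block rather than choosing between zone and patch splits.

    def compact(points, U, blocks, e_max, fit, evaluate, max_iters=10):
        """Hypothetical compaction loop: fit each block, list broken points, subdivide."""
        fitted = []
        for _ in range(max_iters):
            fitted, broken = [], []
            for (a_lo, a_hi) in blocks:
                idx = [i for i, (_, _, a) in enumerate(points) if a_lo <= a <= a_hi]
                if not idx:
                    continue
                sub_pts = [points[i] for i in idx]
                sub_U = [U[i] for i in idx]
                coeffs = fit(sub_pts, sub_U)
                errors = [abs(evaluate(coeffs, *p) - u) for p, u in zip(sub_pts, sub_U)]
                fitted.append(((a_lo, a_hi), coeffs))
                if max(errors) > e_max:          # block contains broken points
                    broken.append((a_lo, a_hi))
            if not broken:
                return fitted                    # every block meets the tolerance
            new_blocks = []                      # subdivide only the failing blocks
            for b in blocks:
                if b in broken:
                    mid = 0.5 * (b[0] + b[1])
                    new_blocks += [(b[0], mid), (mid, b[1])]
                else:
                    new_blocks.append(b)
            blocks = new_blocks
        return fitted

In a full implementation the broken-point listing would also drive the choice between splitting a geometry patch and splitting a control zone.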
The complete hypersurface, after error conditions have been met, is defined by the surface coefficients, and the hybrid space division data, that is, the configuration of the control zones and geometry patches. This data is summarized below:
\[
S_P^x = \{P_0^x = 0, P_1^x, \ldots, P_K^x = W\}, \qquad
S_P^y = \{P_0^y = 0, P_1^y, \ldots, P_L^y = H\}
\]
\[
S_C^n = \{P_0^n, P_1^n, \ldots, P_{B_n}^n\}, \qquad n = 1, \ldots, N
\]
\[
a^{kl\beta}_{ij\gamma_1\ldots\gamma_N}, \; b^{kl\beta}_{ij\gamma_1\ldots\gamma_N}, \qquad
i = 0, \ldots, I, \quad j = 0, \ldots, J, \quad \gamma_n = 0, \ldots, \Gamma_n,
\]
\[
k = 1, \ldots, K, \quad l = 1, \ldots, L, \quad \beta_n = 1, \ldots, B_n, \quad n = 1, \ldots, N
\]
The hypersurface coefficients are referred to as "compacted coefficients". By obtaining the hypersurface, the many warp maps, described via the grid dataset, have been replaced by a small set of compacted coefficients. Compacted warp interface 117 stores the hypersurface data (coefficients and division data) and relays them upon request to decompactor 150. The function of compactor 110 is therefore to prepare these compacted coefficients from a set of distortion and spatial transformation parameters, and to store these compacted warp coefficients in an interface in order to relay the coefficients upon request. This makes for easy, dynamic access to a great number of warp maps. Warp maps are obtained by evaluating the hypersurface at specific control vectors, which is done by the decompactor 150.
FIG. 3A illustrates a typical prior art method to access numerous warp maps. The method consists of acquiring grid maps 310 for warp map generation 320. Each generated warp map 330 is then ready to store and download at step 340. Specifically, the method involves storing a set of warp maps as 2D fitted surfaces. Storing the warp maps requires substantial hardware resources. To be efficient, a device based on this prior art method would require large memory interfaces. Moreover, the method would require rapid access to memories each time a warp map is needed "on the fly". A large and fast memory device is costly. Furthermore, the distortion parameters are restricted to certain discrete values, and the accuracy of the warp map with respect to the actual distortion parameters depends on how close the actual values are to the preset values. In contrast, FIG. 3B illustrates the compactor-decompactor method of the present invention. In this method, individual grid maps 350 are acquired by grid set creator 360, which outputs hybrid grid set 362. A warp set is generated at step 370, which outputs compacted coefficients 372. These coefficients are stored at step 374, and upon request, are read out and decoded at step 380 to dynamically generate a specific warp map 382, which is downloaded at step 384. In the present method of decompacting, a surface evaluation is conducted in each control zone and geometry patch to smooth out the differences.
In the decompactor subsystem 150, the hypersurface is evaluated, or decoded, at a specific vector in control space to obtain a 2D surface that represents the original 2D spatial transformation for those specific control values (i.e., distortion parameters). The decoding of hypersurface to warp map is referred to as decompaction. The 2D surface coefficients are the "warp coefficients", as they define the warp map. The compacted coefficients can be viewed as a compression of the warp coefficients. After decoding, the 2D surface can be evaluated at a specific vector in the spatial space, which is simply a grid point (X_i, Y_i), to obtain the mapped grid point (U_i, V_i), hence recovering the original grid data description {(U_i, V_i), (X_i, Y_i)}. This process is detailed below. Controller 152 obtains the specific parameters including the desired setting and user parameters. It then translates these specific parameters into control parameters corresponding to a vector ā_0
\[
\bar{a}_0 = (a_0^1, a_0^2, \ldots, a_0^N)
\]
in the control space. Controller 152 then passes the control space vector to the decoder 153. As an example, the control parameters might be the keystone angles and throw ratio for a particular projector setup.
Decoder 153 takes the control parameters and determines the control zone Z_C(β_0) to which they belong. Next, the appropriate component hypersurface is evaluated at ā_0 to obtain a 2D surface u(x,y), as defined below:
\[
u(x, y) = \{u^{kl}(x, y)\}, \qquad
u^{kl}(x, y) = \sum_{i=0}^{I}\sum_{j=0}^{J} \tilde{a}^{kl}_{ij}\, F^i(x)\, F^j(y)
\]
\[
\tilde{a}^{kl}_{ij} = \sum_{\gamma_1=0}^{\Gamma_1}\cdots\sum_{\gamma_N=0}^{\Gamma_N}
a^{kl\beta_0}_{ij\gamma_1\ldots\gamma_N}\, F^{\gamma_1}(a_0^1)\cdots F^{\gamma_N}(a_0^N)
\]
\[
P^x_{k-1} \le x \le P^x_k, \quad P^y_{l-1} \le y \le P^y_l \quad \text{(patch boundaries)}, \qquad
k = 1, \ldots, K, \quad l = 1, \ldots, L \quad \text{(patch indices)}
\]
Similar equations describe the 2D surface v(x,y). In evaluating the hypersurface, a reduction from an N+2 dimensional surface to a 2D surface has been made. In particular, there is no control space dependence in u(x,y), as it is defined entirely on the (X, Y) coordinate space. The 2D surfaces u(x,y) and v(x,y) represent the original 2D spatial transformation, or the warp map, at control parameters ā_0. The warp coefficients are:
\[
\tilde{a}^{kl}_{ij}, \; \tilde{b}^{kl}_{ij}, \qquad
i = 0, \ldots, I, \quad j = 0, \ldots, J, \quad k = 1, \ldots, K, \quad l = 1, \ldots, L
\]
This "de-compaction" procedure allows dynamic generation of warp maps for any control vector. Once the hypersurface has been obtained, the decompactor only needs the control vector as input to generate the map.
Furthermore, since the warp maps are in functional form, i.e. 2D surfaces rather than discrete grid data, they can easily be scaled (zoomed or shrunk) or flipped horizontally, vertically, or both. Scaling and flipping operations are simple transformations of the above warp coefficients. An example application consists of a display converter that generates a map to fit a particular display surface from a standard normalized set by performing scaling operations.
Once a map is obtained, it can be evaluated at the output space coordinates (X_i, Y_i), recovering the original grid data for the 2D spatial transformation associated with distortion ā_0. The evaluation of u(x,y) and v(x,y) will give computed coordinates (U_i^c, V_i^c) (note the index i = 1, ..., M_0 here is over the coordinate space only rather than the entire hybrid domain):
\[
U_i^c = u(X_i, Y_i), \qquad V_i^c = v(X_i, Y_i)
\]
During evaluation of these equations, the 2D component surface corresponding to the geometry patch to which (X_i, Y_i) belong is selected. For example, if (X_i, Y_i) lie in Z_P(k,l), then u = u^{kl}. Due to the error reduction performed by the error analysis stage, the computed coordinates will match the original coordinates within the tolerance level:
\[
\left\| (U_i, V_i) - (U_i^c, V_i^c) \right\| \le E_{\max}
\]
Therefore, {(U_i^c, V_i^c), (X_i, Y_i)} reproduces the starting grid relation {(U_i, V_i), (X_i, Y_i)} for the requested transformation.
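In the sketch notation used earlier, recovering the grid description for a requested distortion amounts to evaluating the decompacted surface on the output grid (again a hypothetical continuation of the synthetic example, not the disclosure's code):

    # Dynamic request: decode the warp map at a0 and evaluate it on the grid.
    a0 = 0.375
    c2d_u = decompact(c, a0)
    grid = [(x / 640.0, y / 480.0) for x in range(0, 641, 64) for y in range(0, 481, 48)]
    recovered = [eval_warp(c2d_u, x, y) for (x, y) in grid]
    exact = [x + 0.5 * a0 * x * y for (x, y) in grid]
    print(max(abs(r - e) for r, e in zip(recovered, exact)))  # within the tolerance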
FIG. 4 shows the logic flow diagram of dynamic warp system 100. At steps 401 and 402, spatial transformation and distortion parameters are obtained. At step 410, 2D spatial transformation parameters and multidimensional distortion parameters are concatenated by concatenation stage 113. A hybrid grid dataset and an associated hybrid space are therefore formed with each vector representing both sets of parameters. The distortion component of the hybrid space is referred to as control space. At step 420, divider 114 divides up the control space into control zones, each control zone determining a subset of control parameters. The division of the control space is based on the desired fitting accuracy and is limited by the availability of hardware resources. In addition, the geometry space (coordinate space) is also divided into geometry patches.
The control zones and geometry patches together divide the hybrid space into blocks. At step 430, the data in each block (geometry patch + control zone) is surface fit. The surface fitting process is adaptable to the degree of accuracy required. In one example of the invention, the surfaces are parameterized as polynomials and the degree of the polynomial determines the accuracy of the fit. At step 440, an error analysis of the surface fit is performed. The error of the fit is determined by comparing a set of points calculated from the fit against exact values obtained from the hybrid grid dataset obtained at step 410. If the difference between the two sets of values is more than a pre-determined tolerance level, the results of the error analysis are sent back to step 420 for a finer division of the hybrid space, resulting in a better resolution. Once the tolerance level is met, at step 450, the compacted surface coefficients are saved. At this point the function of compactor 110 is completed. Once the compacted surface coefficients are determined, decompactor 150 functions dynamically. At step 460, dynamic control parameters are obtained. These parameters include multidimensional geometric and optical distortion parameters, as well as user parameters, in a user-friendly format. Any of these parameters may change dynamically. These parameters are then used to construct a hybrid space vector. At step 470, based on the constructed hybrid space vector, a warp map is decoded from the compacted surface coefficients. This decoded warp map represents a transformation that compensates for all geometric and optical distortion parameters. Once the warp map is determined, at step 480, it is relayed and applied to the output pixel coordinates to determine the corresponding input pixel coordinates.
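Tying the hypothetical sketches above together, the offline/online split of FIG. 4 can be mimicked in a few lines (all names and data are from the illustrative examples, not the disclosure):

    # Offline (compactor): fit once, keep only the compacted coefficients.
    compacted = compact(pts, U_exact, blocks=[(-1.0, 1.0)], e_max=1e-6,
                        fit=fit_block, evaluate=eval_block)

    # Online (decompactor): decode a warp map whenever the control parameters change.
    for a0 in (0.1, 0.25, 0.375):                        # e.g. changing keystone angle
        (lo, hi), c_block = next(b for b in compacted if b[0][0] <= a0 <= b[0][1])
        warp_u = decompact(c_block, a0)                  # dynamic warp map for this setting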
In another embodiment of the present invention, dynamic warp system 100 is used for color gamut transformation. In this example, the system handles transformation between two color spaces. The input and output color spaces could have different properties. This is particularly pronounced in a printer application, where an object on a display screen is to be printed on paper or other media. When printing an image from an RGB display onto paper, the printer has to transform the display color space onto the printer (e.g., CMY) color space. In addition, such spaces might be of different dimensions (e.g., RGB to CMYK or even to higher dimensional color spaces with a larger number of primaries). Moreover, the color spaces typically have different ranges. Unlike the RGB color space of a display screen, a printer color space is not an "additive space" but a "subtractive space". For instance, a ray of blue on a display screen creates the color blue, whereas a dot of blue on white paper eliminates all other colors and only reflects blue, hence the name subtractive. The color gamut transformation in this case is nonlinear, relates different ranges, and is very complicated. Moreover, in a printer, there are varying factors like aging, paper type, user parameters, and drift.
In one example of the present invention, dynamic warp system 100 is adapted to input a set of control parameters corresponding to varying conditions and a set of spatial parameters describing the mapping of RGB color coordinates to CMY or other color coordinates, which taken together form the hybrid grid dataset. There is color space grid data and distortion and control space grid data. These data are concatenated by concatenation stage 113 to produce a hybrid vector space. Divider 114 divides the hybrid vector space into hybrid blocks. Each block consists of a distortion parameter control zone and a color space geometry patch. In each hybrid block, the system fits a hypersurface, represented by a number of coefficients, to estimate the transformation grid data. These resulting coefficients are stored in an interface and are available for use by a decoder 153 in decompactor 150 to construct a warp map or, more appropriately, a "color map" that transforms the output color space onto the input color space.
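The same machinery can be sketched for the color case, with the color coordinates taking the place of the spatial coordinates. The toy example below (hypothetical names, a made-up RGB-to-CMY device model with a single "aging" control parameter, one hybrid block, multilinear basis) only illustrates the shape of the data involved:

    import numpy as np

    def rgb_to_cmy(rgb, aging):
        """Toy 'exact' device model: subtractive CMY with an aging-dependent gain."""
        return (1.0 - np.asarray(rgb, dtype=float)) * (1.0 - 0.2 * aging)

    # Hybrid grid dataset: (R, G, B, aging) -> (C, M, Y), sampled on a coarse grid.
    samples = [(r, g, b, t)
               for r in np.linspace(0, 1, 5)
               for g in np.linspace(0, 1, 5)
               for b in np.linspace(0, 1, 5)
               for t in np.linspace(0, 1, 3)]
    targets = np.array([rgb_to_cmy((r, g, b), t) for (r, g, b, t) in samples])

    # One hybrid block, multilinear basis in (R, G, B, aging): 2*2*2*2 = 16 terms per
    # output channel -- these are the block's compacted coefficients.
    A = np.array([[r**i * g**j * b**k * t**l
                   for i in range(2) for j in range(2)
                   for k in range(2) for l in range(2)]
                  for (r, g, b, t) in samples])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    print(coeffs.shape)  # (16, 3): one column of compacted coefficients per C, M, Y

Decoding at a measured aging value then proceeds exactly as in the spatial case, yielding a 3D color map instead of a 2D warp map.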
Another example of color gamut transformation occurs in an adjustable display, like a TV monitor, with adjustable brightness, contrast, and tint, which form a control space. For each particular setting of these parameters there is a color transformation map. Dynamic warp system 100, in this example, generates a hybrid space from the color space and control parameters and divides it into hybrid blocks as explained above. In each hybrid block, the system fits the grid transformation data with a hypersurface. Upon selection of a particular setting, the system decodes a warp map from the hypersurface, transforming a color space vector corresponding to the factory standard settings onto any specific setting determined by the user.
In both cases, the dynamic warp map generation allows for mapping to the limitation of the gamut range, and seamless and graceful mapping of the outer edges. In another example of the color gamut transformation of the present invention, a color sensor is used to calibrate an aging printer or monitor. The calibration measurement results are fed back as control parameters. Controller 152 constructs a hybrid space vector out of these measurement results. Decoder 153 then dynamically generates a warp map for the appropriate color gamut transformation. The sensor calibration could also be used when printing on different types of paper, as they yield different output colors. A new warp map is generated for a new type of paper, yielding an optimal and seamless gamut transformation. Certain types of paper can never produce ideal printing results; even so, this method assures improved printing quality.
As will be apparent to those skilled in the art, various modifications and adaptations of the structure described above are possible without departing from the present invention, the scope of which is defined in the appended claims.

Claims
1. An electronic system for dynamic m-dimensional digital data transformation, subject to varying N-dimensional distortion and control parameters, said system comprising:
a. an input interface to obtain m-dimensional transformation data and N-dimensional distortion and control parameter sets,
b. a concatenation stage coupled to said input interface, to concatenate m-dimensional transformation data and N-dimensional distortion and control parameter sets and to produce m+N dimensional hybrid grid data in a corresponding hybrid vector space,
c. a divider coupled to said concatenation stage, to divide said hybrid vector space into a plurality of hybrid blocks, consisting of N-dimensional distortion control zones and m-dimensional geometry patches, based on a desired accuracy level,
d. a surface function estimator, coupled to said divider, to parameterize said hybrid grid data, to generate a hypersurface map in each hybrid block, to estimate said hybrid grid data, wherein said hypersurfaces are represented by a plurality of compacted coefficients,
e. a surface map interface, coupled to said surface function estimator, to store said compacted coefficients,
f. a controller, to obtain instantaneous user and control parameters and to generate a vector in the hybrid vector space based on said user and control parameters,
g. a decoder, coupled to said surface map interface and said controller, to dynamically compute instantaneous warp maps from said compacted coefficients corresponding to said vector in the hybrid vector space, and,
h. an output interface to store and relay said instantaneous maps.
2. The system of claim 1 , further including an error analysis stage coupled to said surface function estimator, said divider, and said concatenation stage, to check if the differences between a set of points computed from the hypersurface estimation and the same set of points extracted from the hybrid grid data are less than a preset tolerance level, and if not, to send the comparison results to said divider to further refine the hybrid vector space division.
3. An electronic system for dynamic two-dimensional digital image transformation described by two-dimensional spatial grid data and varying geometric and optical distortions described by multidimensional parameter data sets, said system comprising:
a. an input interface to obtain two-dimensional spatial grid data and multidimensional distortion parameter data sets representing varying geometric and optical distortions,
b. a concatenation stage, coupled to said input interface, to concatenate the multidimensional distortion parameter data sets and the two-dimensional spatial transformation grid data, and to produce hybrid grid data in a corresponding hybrid vector space,
c. a divider, coupled to said concatenation stage, to divide said hybrid vector space into a plurality of hybrid blocks, consisting of multidimensional distortion control zones and two-dimensional spatial geometry patches, based on a desired accuracy level,
d. a surface function estimator, coupled to said divider, to parameterize said hybrid grid data to generate a hypersurface map in each hybrid block, represented by a number of compacted coefficients,
e. a warp interface, coupled to said surface functional estimator, to store said compacted coefficients representing the hypersurface maps,
f. a controller to obtain instantaneous control parameters, including display parameters, distortion parameters, and user parameters, and to calculate hybrid space vectors from said control parameters,
g. a decoder, coupled to said compacted warp interface and said controller, to dynamically compute instantaneous warp maps from said compacted coefficients corresponding to said hybrid space vectors, and,
h. an output interface to store and relay said instantaneous warp maps.
4. The system of claim 3, further including an error analysis stage, coupled to said surface function estimator, said divider, and said concatenation stage, to check if the difference between a set of grid points computed from the hypersurface estimation and the same set of points extracted from the hybrid grid data, is less than a preset tolerance level, and if not, to send the comparison results to said divider to further refine the hybrid vector space division.
5. The system of claim 3, further including a display converter to generate a map to fit a particular display surface from a standard normalized set by performing scaling operations.
6. The system of claim 3, wherein said surface function estimator is adapted to parameterize the hybrid grid data as surface polynomials.
7. The system of claim 6, wherein said surface functional estimator is adapted to vary the degree of said surface polynomials according to an accuracy level.
8. The system of claim 3, wherein said surface functional estimator is adapted to vary spatial transformation details according to a preset accuracy level.
9. The system of claim 3, wherein said divider is adapted to vary the number of control zones and geometry patches according to a preset accuracy level.
10. The system of claim 1 used for color gamut transformation, wherein the system is adapted to generate hybrid space blocks by dividing the color space into geometry patches, and dividing associated distortion and control parameter space into control zones, and to fit exact grid data with hypersurfaces in each hybrid space block.
11. The system of claim 10 used in a printer application, wherein the hypersurfaces map a display color space onto a printer color space, said printer having varying distortion and control parameters including drift, aging, paper type, and user selected shades.
12. The system of claim 11 , further including a color sensor to perform color calibration, and wherein the calibration results are used as control parameters and are converted into a hybrid space vector, to generate a new warp map for color transformation.
13. The system of claim 10 used in a display device, wherein the hypersurfaces map a standard RGB space onto a user selected color space characterized with desired user parameters including brightness, contrast, and tint.
14. The system of claim 13, further including a color sensor to perform color calibration, and wherein the calibration results are used as control parameters and are converted into a hybrid space vector, to generate a new warp map for color transformation.
15. An electronic method for dynamic m-dimensional digital data transformation, subject to varying N-dimensional distortion and control parameters, said method comprising:
a. obtaining m-dimensional transformation data and N-dimensional distortion and control parameter sets,
b. concatenating the m-dimensional transformation data and N-dimensional distortion and control parameter sets and producing m+N dimensional hybrid grid data in a corresponding hybrid vector space,
c. dividing said hybrid vector space into a plurality of hybrid blocks, consisting of N-dimensional distortion control zones and m-dimensional geometry patches, based on a desired accuracy level,
d. parameterizing said hybrid grid data to generate a hypersurface map in each hybrid block to estimate said hybrid grid data, wherein said hypersurfaces are represented by a plurality of compacted coefficients,
e. storing said compacted coefficients,
f. obtaining instantaneous user and control parameters and generating a vector in the hybrid vector space based on said user and control parameters,
g. dynamically computing instantaneous warp maps from said compacted coefficients corresponding to said vectors in the hybrid vector space obtained in "f", and,
h. storing and relaying said instantaneous warp maps.
16. The method of claim 15, further performing an error analysis to check if the differences between a set of points computed from the hypersurface estimation and the same set of points extracted from the hybrid grid data are less than a preset tolerance level, and if not, further refining the hybrid vector space division.
17. An electronic method for dynamic two-dimensional digital image transformation described by two-dimensional spatial grid data and varying geometric and optical distortions described by multidimensional parameter data sets, said method comprising:
a. obtaining the two-dimensional spatial grid data and multidimensional distortion parameter data sets representing varying geometric and optical distortions,
b. concatenating the multidimensional distortion parameter data sets and the two-dimensional spatial grid data, to produce hybrid grid data in a corresponding hybrid vector space,
c. dividing said hybrid vector space into a plurality of hybrid blocks, consisting of multidimensional distortion control zones and two-dimensional spatial geometry patches, based on a desired accuracy level and,
d. parameterizing said hybrid grid data to generate a hypersurface in each hybrid block, wherein the hypersurface estimates the hybrid grid data, and wherein the hypersurface is represented by a number of compacted coefficients,
e. storing said compacted coefficients representing the hypersurface maps,
f. obtaining instantaneous control parameters, including display parameters, distortion parameters, and user parameters, and calculating hybrid space vectors based on said control parameters,
g. dynamically computing instantaneous warp maps from said compacted coefficients corresponding to said hybrid space vectors obtained in "f", and,
h. storing and relaying said instantaneous warp maps.
18. The method of claim 17, further performing an error analysis to check if the difference between a set of grid points computed from the hypersurface estimation and the same set of points extracted from the hybrid grid data is less than a preset tolerance level, and if not, further refining the hybrid vector space division.
19. The method of claim 17, further generating a map to fit a particular display surface from a standard normalized set by performing scaling operations.
20. The method of claim 17, wherein the fitted hypersurfaces are surface polynomials.
21. The method of claim 20, further varying said surface polynomial degrees according to a preset accuracy level.
22. The method of claim 17, further varying spatial transformation details according to a preset accuracy level.
23. The method of claim 17, further varying the number of control zones and geometry patches according to a preset accuracy level.
24. The method of claim 15 used as a color gamut transformation, wherein the method generates hybrid space blocks by dividing the color space into geometry patches, and dividing associated distortion and control parameter space into control zones, and fits grid data with hypersurfaces in each hybrid space block.
25. The method of claim 24 used in a printer application, wherein the hypersurfaces map a display color space onto a printer color space, said printer having varying distortion and control parameters including drift, aging, paper type, and user selected shades.
26. The method of claim 25, further including color sensing to perform color calibration, and wherein the calibration results are used as control parameters and are converted to a hybrid space vector, to generate a new warp map for color transformation.
27. The method of claim 24 used in a display device, wherein the hypersurfaces map a standard RGB space onto a user selected color space characterized with desired user parameters including brightness, contrast, and tint.
28. The method of claim 27, further including color sensing to perform color calibration, and wherein the calibration results are used as control parameters and are converted to a hybrid space vector, to generate a new warp map for color transformation.
PCT/US2004/040851 2004-12-07 2004-12-07 Dynamic warp map generation system and method WO2006062508A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2007545427A JP2008526055A (en) 2004-12-07 2004-12-07 Dynamic warp map generation system and method
PCT/US2004/040851 WO2006062508A1 (en) 2004-12-07 2004-12-07 Dynamic warp map generation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2004/040851 WO2006062508A1 (en) 2004-12-07 2004-12-07 Dynamic warp map generation system and method

Publications (1)

Publication Number Publication Date
WO2006062508A1 true WO2006062508A1 (en) 2006-06-15

Family

ID=36578208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/040851 WO2006062508A1 (en) 2004-12-07 2004-12-07 Dynamic warp map generation system and method

Country Status (2)

Country Link
JP (1) JP2008526055A (en)
WO (1) WO2006062508A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008113416A (en) * 2006-08-11 2008-05-15 Silicon Optix Inc System and method for automatic calibration and correction of shape of display and color
JP2009239638A (en) * 2008-03-27 2009-10-15 Seiko Epson Corp Method for correcting distortion of image projected by projector, and projector
EP2184915A1 (en) * 2007-08-31 2010-05-12 Silicon Hive B.V. Image processing device, image processing method, and image processing program
JP2015032313A (en) * 2013-08-01 2015-02-16 シゼイ シジブイ カンパニー リミテッド Image correction method and apparatus using creation of feature points

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6753907B1 (en) * 1999-12-23 2004-06-22 Justsystem Corporation Method and apparatus for automatic keystone correction

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3430178B2 (en) * 1993-12-16 2003-07-28 株式会社リコー Color correction method and apparatus for image processing system
JP3497805B2 (en) * 2000-08-29 2004-02-16 オリンパス株式会社 Image projection display device
JP2003287707A (en) * 2002-03-27 2003-10-10 Denso Corp Image conversion method, image processor, headup display and program
JP2004090540A (en) * 2002-09-03 2004-03-25 Ricoh Co Ltd Method of correcting image distortion and image forming apparatus
JP2004234379A (en) * 2003-01-30 2004-08-19 Sony Corp Image processing method, image processor, and imaging device and display device to which image processing method is applied

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6753907B1 (en) * 1999-12-23 2004-06-22 Justsystem Corporation Method and apparatus for automatic keystone correction

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008113416A (en) * 2006-08-11 2008-05-15 Silicon Optix Inc System and method for automatic calibration and correction of shape of display and color
EP2184915A1 (en) * 2007-08-31 2010-05-12 Silicon Hive B.V. Image processing device, image processing method, and image processing program
EP2184915A4 (en) * 2007-08-31 2011-04-06 Silicon Hive Bv Image processing device, image processing method, and image processing program
US9516285B2 (en) 2007-08-31 2016-12-06 Intel Corporation Image processing device, image processing method, and image processing program
JP2009239638A (en) * 2008-03-27 2009-10-15 Seiko Epson Corp Method for correcting distortion of image projected by projector, and projector
JP2015032313A (en) * 2013-08-01 2015-02-16 シゼイ シジブイ カンパニー リミテッド Image correction method and apparatus using creation of feature points
US10043094B2 (en) 2013-08-01 2018-08-07 Cj Cgv Co., Ltd. Image correction method and apparatus using creation of feature points

Also Published As

Publication number Publication date
JP2008526055A (en) 2008-07-17


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2007545427

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 1020077015641

Country of ref document: KR

122 Ep: pct application non-entry in european phase

Ref document number: 04813200

Country of ref document: EP

Kind code of ref document: A1