CN100573595C - Virtual viewpoint image generation method, three-dimensional image display method, and apparatus - Google Patents

Virtual viewpoint image generation method, three-dimensional image display method, and apparatus

Info

Publication number
CN100573595C
CN100573595C · CNB200480017333XA · CN200480017333A
Authority
CN
China
Prior art keywords
projection point
image
unit
subject
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CNB200480017333XA
Other languages
Chinese (zh)
Other versions
CN1809844A
Inventor
国田丰
桥本秋彦
陶山史朗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp
Publication of CN1809844A
Application granted
Publication of CN100573595C
Anticipated expiration
Expired - Lifetime (current)

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A virtual viewpoint image generation method is provided that generates, from a plurality of subject images captured by a plurality of cameras, a virtual viewpoint image, that is, the image of the subject as seen from a virtual viewpoint. In this method, projection planes having a multilayer structure are set; the corresponding points on the subject images corresponding to each projection point on the projection planes are obtained; the color information of each projection point is determined based on the color information of the plurality of corresponding points; for a plurality of projection points that appear to overlap when viewed from a certain reference viewpoint in space, the degree of possibility that the subject exists at the distance corresponding to the position of each projection point is calculated based on the degree of correlation of the corresponding points; and the color information of the projection points that appear to overlap from the virtual viewpoint is blended in accordance with the degrees of possibility that the subject exists, thereby determining the color information of each pixel of the virtual viewpoint image.

Description

Virtual viewpoint image generation method, three-dimensional image display method, and apparatus
Technical field
The present invention relates to techniques for estimating information on the three-dimensional shape of an object from a plurality of images and for generating an image using that information. The techniques of the present invention can be applied, for example, to systems that support face-to-face communication, such as videophones.
Background technology
Conventionally, in the fields of computer graphics (CG) and virtual reality (VR), techniques for generating by computer an image of a subject seen not only from the positions where cameras are installed but also from any viewpoint the user desires have been studied widely.
For example, there are methods that use a plurality of images obtained by photographing a subject under different conditions to display a three-dimensional image of the subject, or to generate an image of the subject seen from a virtual viewpoint (hereinafter referred to as a virtual viewpoint image).
As the method for the three-dimensional image that shows object, for example there is use such as DFD (Depth-Fused3-D, the degree of depth merges three-dimensional) display, have the method for the display of a plurality of picture display faces.Described DFD is the display (for example, reference literature 1: Japan speciallys permit No. 3022558 communique) that has overlapped a plurality of picture display faces with a certain interval.In addition, described DFD roughly is divided into intensification modulation type and infiltration type.
When an image of the object is displayed on the DFD, a two-dimensional image of the object is displayed on each image display plane. If the DFD is of the luminance-modulation type, the luminances of the pixels that appear to overlap when viewed from a preset observer viewpoint (reference viewpoint) are set in a ratio corresponding to the shape of the object in the depth direction. Then, for a certain point on the object the pixel on the display plane nearer the observer appears brighter, and for another point the pixel on the display plane farther from the observer appears brighter. As a result, an observer viewing the images displayed on the image display planes of the DFD perceives a stereoscopic image (three-dimensional image) of the object.
If the DFD is of the transmission type, the transmittances of the pixels that appear to overlap when viewed from the preset observer viewpoint (reference viewpoint) are set in a ratio corresponding to the shape of the object in the depth direction for display.
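As a concrete illustration of the two DFD types described above, the following minimal Python sketch splits one pixel's luminance between two overlapped display planes; the linear front/back ratio and all names are assumptions for illustration, not the formula prescribed by Document 1:

```python
import numpy as np

def dfd_luminance_split(luminance, depth, z_front, z_back):
    """Split one pixel's luminance between two overlapped display planes
    in proportion to the displayed point's position along the depth axis
    (luminance-modulation DFD); a transmission-type DFD would modulate
    transmittances in a comparable ratio instead."""
    t = float(np.clip((depth - z_front) / (z_back - z_front), 0.0, 1.0))
    return (1.0 - t) * luminance, t * luminance  # (front plane, back plane)

# A point two thirds of the way toward the back plane:
front, back = dfd_luminance_split(luminance=1.0, depth=2.0, z_front=0.0, z_back=3.0)
# front ~= 0.33, back ~= 0.67: the point appears to float between the two planes
```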
Besides the display method using the DFD, a three-dimensional image of the object can also be displayed by, for example, presenting on a single liquid crystal display two images having a parallax equivalent to the spacing of the observer's left and right eyes.
When generating such images, that is, images for displaying a three-dimensional image of the object or images of the object seen from an arbitrary viewpoint, it suffices to generate each image from the model of the object if its three-dimensional shape is known, for example, because it was produced by computer graphics. When the three-dimensional shape of the object is not known, however, the three-dimensional shape of the object, that is, its geometric model, must be obtained before the images can be generated.
Likewise, when generating the virtual viewpoint image from the plurality of images, the geometric model of the subject must first be obtained based on those images. The geometric model of the subject obtained at this point is expressed, for example, as a set of primitives called polygons or voxels.
There are various methods of obtaining the geometric model of the subject from a plurality of images, and they have been studied extensively in the field of computer vision under the name Shape from X. A representative model acquisition method among Shape from X is the stereo method (see, e.g., Document 2: Takeo Kanade et al., "Virtualized Reality: Constructing Virtual Worlds from Real Scenes," IEEE MultiMedia, Vol. 4, No. 1, pp. 34-37, 1997).
In the stereo method, the geometric model of the subject is obtained from a plurality of images taken of the subject from different viewpoints. The distance from a reference viewpoint to each point on the subject used for building the model is obtained by the principle of triangulation, through corresponding-point matching, that is, by establishing correspondences between points (pixels) on the images. However, the stereo method does not immediately yield the geometric model of the subject; what is obtained is a group of points on the subject's surface. To obtain the geometric model of the subject, it is therefore necessary to determine structural information about how the points in the point group connect to one another and what kinds of surfaces they form (see, e.g., Document 3: Katsushi Ikeuchi, "Modeling real objects from images," Journal of the Robotics Society of Japan, Vol. 16, No. 6, pp. 763-766, 1998).
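For reference, once a correspondence has been established, the triangulation step itself is elementary; a minimal sketch for the parallel two-camera case (hypothetical names, the standard relation Z = f * B / d) is:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of one matched point pair seen by two parallel cameras, by the
    principle of triangulation: Z = f * B / d, where f is the focal length
    in pixels, B the camera baseline, and d the disparity of the match."""
    return focal_px * baseline_m / disparity_px

# A match displaced by 8 px between cameras 6 cm apart, with f = 800 px:
z = depth_from_disparity(800.0, 0.06, 8.0)  # -> 6.0 m
```

As the text notes, the difficult part is obtaining reliable correspondences, not this arithmetic.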
That is, in methods that obtain the geometric model of the subject using the stereo method, the device that generates the image (a computer) must perform complex processing such as fitting surface shapes to the subject and statistical processing. High computing power is therefore required.
In addition, in the method for the geometric model that obtains described subject based on a plurality of images, as the representational model adquisitiones arranged side by side with described anaglyph, have based on the profile of taking the subject of each image that obtains in a plurality of viewpoints, determine the zone that described subject occupies in the space the method that is called Shape from Silhouette (below, be called Shape from Silhouette method) (for example, reference literature 4: " Potmesil; M: " Generating Octree Models of3D Objects from their Silhouettes in a Sequence of Images; " CVGIP40, pp.1-29,1987 ".)。
The geometric model of the subject obtained by the Shape from Silhouette method is usually expressed as a set of small cubes called voxels. However, when the geometric model of the subject is expressed with voxels, the amount of data required to express the three-dimensional shape of the subject becomes enormous. Accordingly, high computing power is required to use the geometric model obtained by the Shape from Silhouette method.
Therefore, in recent years, methods have been proposed that, instead of expressing the geometric model of the subject with polygons or voxels as in the stereo or Shape from Silhouette methods, texture-map partial images of the subject onto projection planes having a multilayer structure and express the three-dimensional shape of the subject with the multiple layers (see, e.g., Document 5: Jonathan Shade et al., "Layered Depth Images," SIGGRAPH 98 Conference Proceedings, pp. 231-242, 1998; Document 6: Tago, Nitta, Naemura, and Harashima, "Video-Based Rendering using a layered representation," 3D Image Conference 2001, pp. 33-36, 2001).
The texture mapping here is the following method: projection planes having a multilayer structure are set, and partial images (texture images) cut out from the captured images are pasted onto the projection planes corresponding to the distances of the objects appearing in those texture images, thereby producing a three-dimensional visual effect. It therefore has the advantages that even the graphics hardware installed in widespread consumer personal computers can process it fast enough, and that the data are easy to handle.
On the other hand, when the geometric model of the subject is expressed with multiple layers based on texture mapping, the detailed shape of the subject cannot be expressed if the spacing of the projection planes is set too wide. The following processing has therefore been devised: the rough shape is expressed with the projection planes, and for the fine shape each pixel of the texture image is given, in addition to color information such as R (red), G (green), and B (blue), one more value (a depth value). Document 5 proposes a method of shifting the position of each pixel within the texture image according to its depth value, thereby expressing fine depth that the multiple layers alone cannot express. Document 6 proposes a method of setting the transparency of each pixel according to its depth value to the same end.
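The per-pixel transparency variant attributed to Document 6 can be sketched as follows; the linear falloff kernel is an assumption chosen for brevity, not necessarily the mapping used in that document:

```python
import numpy as np

def texel_alpha(depth_map, plane_depth, plane_spacing):
    """Per-pixel transparency for a texture image pasted on one projection
    plane: a pixel whose stored depth value lies on the plane stays opaque,
    and it fades out as its depth approaches a neighbouring plane."""
    d = np.asarray(depth_map, dtype=np.float64)
    alpha = 1.0 - np.abs(d - plane_depth) / plane_spacing
    return np.clip(alpha, 0.0, 1.0)
```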
Summary of the invention
However, in methods that obtain the geometric model of the subject by the stereo method, the result is easily affected by the shape of the subject, the pattern (texture) of its surface, and the environment around the subject, and distance information of high reliability cannot necessarily be obtained for subjects of arbitrary shape or for every point on the subject (see, e.g., Document 7: Masatoshi Okutomi, "Why is stereo vision difficult?," Journal of the Robotics Society of Japan, Vol. 16, No. 6, pp. 39-43, 1998).
When the distance to the subject is estimated, if the reliability of the estimate is low, a wrong distance is sometimes estimated. When the distance estimate is wrong, a virtual viewpoint image generated using the resulting geometric model of the subject exhibits, for example, discontinuous noise at the part where the distance was misestimated. For example, as shown in Fig. 1, the virtual viewpoint image then looks as if a defective region 7B had arisen in a part of the subject's image 7.
Furthermore, the Shape from Silhouette method obtains the geometric model of the subject on the assumption, in principle, that the subject is convex. It therefore has the problem that a correct model of the subject cannot be obtained when the subject is concave as a whole or in some part.
It is also difficult for the Shape from Silhouette method to extract accurately the contour between the background and the subject in an image, and accurate contour extraction itself remains a major research topic in the field of computer vision. That is, the geometric model of the subject obtained by the Shape from Silhouette method is a model based on an inaccurate contour, and its reliability cannot be said to be sufficiently high. Consequently, there is the problem that images generated from the geometric model obtained by the Shape from Silhouette method do not attain fully satisfactory image quality.
Furthermore, expressing the three-dimensional shape of the subject with multiple layers as in texture mapping presupposes that the depth value given to each texture pixel, that is, the shape of the subject, has been obtained accurately. Therefore, when the shape of the object is unknown, the geometric model of the subject must first be obtained. As a result, there is the problem that if parts of the shape estimate have low reliability, the texture images are sometimes pasted onto wrong projection planes, and the generated image sometimes deteriorates significantly.
In addition, in the method for the 3D shape that shows described subject by described texture, processing to the projecting plane of described sandwich construction reproducing image is at a high speed, but in the processing of obtaining described depth value, if obtain the shape of described subject exactly, then need the high processing ability.
As described above, the prior art has the problem that, when the shape of the subject is estimated, the distance is easily misestimated at parts where the reliability of the estimate is low, discontinuous noise arises in the generated image, and the image quality is low.
To prevent the deterioration of image quality caused by wrong estimates of the subject's shape, the reliability of the estimation should be raised; to do so, however, many images must be used and rigorous computation performed to obtain a highly accurate geometric model of the subject. In that case, the device that generates the virtual viewpoint image needs high processing performance (computing power). There is therefore also the problem that it is difficult for widespread consumer personal computers and the like to generate, at high speed, virtual viewpoint images with little image quality deterioration.
Moreover, to raise the reliability of the geometric model of the subject, images taken from more viewpoints are needed. This raises the problem that the camera system becomes large and the apparatus structure complex.
An object of the present invention is to provide a technique that, when an image of a subject is generated after obtaining its three-dimensional shape from a plurality of images, can reduce the significant image quality deterioration produced at parts where the reliability of the shape estimate of the subject is low.
Another object of the present invention is to provide a technique that, when an image of a subject is generated after obtaining its three-dimensional shape from a plurality of images, can reduce partial image quality deterioration and generate the image in a short time even with a device of relatively low processing performance.
Another object of the present invention is to provide a technique that allows the camera system which takes the images used for obtaining the geometric model of the subject to be made smaller, and the apparatus simpler.
The above and other objects and novel features of the present invention will become apparent from the description in this specification and the accompanying drawings.
To solve the above problems, the present invention can be configured as a virtual viewpoint image generation method comprising: a step of obtaining a plurality of subject images captured by a plurality of cameras; a step of determining a virtual viewpoint as the position from which the subject is observed; and a step of generating, based on the obtained images of the subject, a virtual viewpoint image as the image seen when the subject is observed from the virtual viewpoint. The step of generating the virtual viewpoint image comprises: step 1 of setting projection planes having a multilayer structure; step 2 of obtaining the corresponding points on each subject image that correspond to each projection point on the projection planes; step 3 of determining the color information or luminance information of each projection point based on the color information or luminance information of the plurality of corresponding points; step 4 of taking the viewpoint of the camera specific to the projection plane to which the projection point belongs as a reference viewpoint and, for a plurality of projection points that appear to overlap when viewed from that reference viewpoint in space, calculating the degree of possibility that the subject exists at the distance corresponding to the position of each projection point, based on the degree of correlation of the corresponding points or their neighborhoods; step 5 of applying, to the color information or luminance information of the projection points that appear to overlap from the virtual viewpoint, blending corresponding to the degrees of possibility that the subject exists, thereby determining the color information or luminance information of each pixel of the virtual viewpoint image; and step 6 of repeating steps 1 to 5 for all points corresponding to the pixels of the virtual viewpoint image.
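The following Python sketch illustrates steps 1 to 6 under simplifying assumptions: a single fixed set of planes swept along each ray of the virtual viewpoint, the inverse variance of the camera samples standing in for the degree of correlation, and a hypothetical sample(cam_index, point3d) helper standing in for calibrated camera projection:

```python
import numpy as np

def plane_sweep_virtual_view(n_cameras, sample, ray_origins, ray_dirs, plane_depths):
    """Minimal sketch of steps 1-6; `sample(i, p)` is an assumed helper that
    projects the 3-D point p into camera i and returns the RGB of the
    corresponding pixel."""
    pixels = np.zeros((len(ray_origins), 3))
    for k, (o, d) in enumerate(zip(ray_origins, ray_dirs)):   # step 6: every pixel
        colors, scores = [], []
        for depth in plane_depths:                            # step 1: layered planes
            p = o + depth * d                                 # a projection point
            samples = np.array([sample(i, p) for i in range(n_cameras)])  # step 2
            colors.append(samples.mean(axis=0))               # step 3: mixed color
            # step 4: strong agreement (low variance) -> subject likely here
            scores.append(1.0 / (1e-6 + samples.var(axis=0).sum()))
        prob = np.asarray(scores) / np.sum(scores)            # existence possibility
        pixels[k] = prob @ np.vstack(colors)                  # step 5: blend
    return pixels
```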
The image generation method of the present invention can also be configured to comprise: a step of obtaining images taken of a subject from a plurality of different viewpoints; a step of obtaining the three-dimensional shape of the subject from the plurality of images; and a step of generating, based on the obtained three-dimensional shape of the subject, the image of the subject seen from the observer's viewpoint. The step of obtaining the three-dimensional shape of the subject comprises: a step of setting projection planes having a multilayer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining the color information of each projection point, that is, each point on the projection planes, from the color information or luminance information of the corresponding points on the obtained images; a step of calculating the degree of correlation among the corresponding points that correspond to each projection point; and a step of determining, for a plurality of projection points that appear to overlap when viewed from the reference viewpoint, the existence probability, that is, the probability that the surface of the object exists at each projection point, based on the degrees of correlation of the projection points. The step of calculating the degree of correlation comprises: a step of preparing several groups of shooting units, each being a combination of some viewpoints selected from the plurality of viewpoints; and a step of obtaining the degree of correlation from the corresponding points on the images included in each shooting unit. The step of determining the existence probability comprises: a step of calculating an existence probability based on the degree of correlation of each projection point obtained for each shooting unit; and a step of performing integration processing on the existence probabilities determined for the shooting units, thereby determining the existence probability of each projection point.
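A minimal sketch of the shooting-unit idea follows; pair-wise units, inverse variance as the agreement score, and simple averaging as the integration processing are all assumptions chosen for brevity:

```python
import numpy as np
from itertools import combinations

def existence_score_by_units(corresponding_pixels, unit_size=2):
    """Score one projection point from camera subsets ('shooting units'):
    evaluate the agreement of the corresponding pixels within each small
    subset separately, then integrate the per-unit scores.
    `corresponding_pixels` is an (N, 3) array, one RGB sample per camera."""
    pix = np.asarray(corresponding_pixels, dtype=np.float64)
    unit_scores = []
    for unit in combinations(range(len(pix)), unit_size):
        sub = pix[list(unit)]
        unit_scores.append(1.0 / (1e-6 + sub.var(axis=0).sum()))
    return float(np.mean(unit_scores))  # integration: here a plain average
```

Evaluating small subsets separately can keep a camera whose view of the point is occluded from spoiling the whole estimate, which is one motivation for grouping viewpoints into units.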
The present invention can also be configured as the following image generation method comprising: a step of obtaining a plurality of images taken of a subject while changing the focusing distance; a step of setting a virtual viewpoint as the viewpoint from which the subject appearing in the plurality of images is observed; a step of obtaining the three-dimensional shape of the subject from the plurality of images; and a step of generating, based on the obtained three-dimensional shape of the subject, the image of the subject seen from the virtual viewpoint. The step of obtaining the three-dimensional shape of the subject comprises: a step of setting projection planes having a multilayer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining the color information or luminance information of each projection point, that is, each point on the projection planes, from the color information or luminance information of the corresponding points on the obtained images; a step of determining the focus degree of each projection point from the focus degrees of the corresponding points that correspond to it; and a step of determining, for a plurality of projection points that appear to overlap when viewed from the reference viewpoint, the existence probability, that is, the probability that the surface of the subject exists at the distance corresponding to the position of each projection point, based on the focus degrees of the projection points. In the step of generating the image of the subject seen from the virtual viewpoint, the color information or luminance information of the projection points that appear to overlap from the virtual viewpoint is mixed in ratios corresponding to the existence probabilities, thereby determining the color information or luminance information of each point on the generated image.
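The focus degree of a corresponding point is commonly realized as a local sharpness measure; the sketch below uses the variance of a discrete Laplacian, which is one standard choice rather than a measure prescribed by the claim:

```python
import numpy as np

def focus_degree(gray_patch):
    """Sharpness of a small grayscale patch around a corresponding point:
    in-focus patches have strong second derivatives, so the focusing
    distance whose image scores highest marks the likely subject depth."""
    p = np.asarray(gray_patch, dtype=np.float64)
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])   # 4-neighbour discrete Laplacian
    return float(np.var(lap))
```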
The present invention can also be configured as an image generation method comprising: a step of obtaining a plurality of images taken of a subject under different conditions; a step of obtaining the three-dimensional shape of the subject from the plurality of images; and a step of generating, based on the obtained three-dimensional shape of the subject, the image of the subject seen from the observer's viewpoint. The step of obtaining the three-dimensional shape of the subject comprises: a step of setting projection planes having a multilayer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining the color information or luminance information of each projection point, that is, each point on the projection planes, from the color information or luminance information of the corresponding points on the obtained images; and a step of determining, for a plurality of projection points that appear to overlap when viewed from the reference viewpoint, the existence probability, that is, the probability that the surface of the subject exists at each projection point. The step of determining the existence probability comprises: a step of calculating an evaluation value for each projection point from the image information of its corresponding points; a step of performing statistical processing on the evaluation values of the projection points; and a step of calculating the existence probability of each projection point based on the evaluation values after the statistical processing.
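One concrete reading of the statistical-processing step is sketched below; the softmax mapping is an assumed example of converting evaluation values into a probability distribution along one ray, not the specific processing of the claim:

```python
import numpy as np

def existence_probability(eval_values, lower_is_better=True):
    """Turn the evaluation values of the projection points along one ray
    (e.g. matching errors, where smaller means a better match) into an
    existence probability distribution over those points."""
    v = np.asarray(eval_values, dtype=np.float64)
    if lower_is_better:
        v = -v
    w = np.exp(v - v.max())   # shifted for numerical stability
    return w / w.sum()
```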
The present invention can also be configured as the following three-dimensional image display method comprising: a step of obtaining a plurality of images taken of a subject under different conditions; a step of obtaining the three-dimensional shape of the subject from the plurality of images; a step of setting the position of the viewpoint from which the observer views a plurality of image display planes located at different depths as seen from the observer; a step of generating, based on the obtained three-dimensional shape of the subject, the two-dimensional images to be displayed on the image display planes; and a step of presenting a three-dimensional image of the subject by displaying the generated two-dimensional images on the display planes. The step of obtaining the three-dimensional shape of the subject comprises: a step of setting projection planes having a multilayer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining the color information or luminance information of each projection point, that is, each point on the projection planes, from the color information or luminance information of the corresponding points on the obtained images; and a step of determining, for a plurality of projection points that appear to overlap when viewed from the reference viewpoint, the existence probability, that is, the probability that the surface of the subject exists at each projection point. In the step of generating the two-dimensional images, the color information or luminance information and the existence probability of each projection point are converted into the color information or luminance information and existence probability of the corresponding display point, that is, the point on the image display plane corresponding to the projection plane that contains the projection point, and the two-dimensional images are generated accordingly. In the step of presenting the three-dimensional image of the subject, each display point is displayed with its color information or luminance information at a brightness corresponding to its existence probability.
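A sketch of the conversion from projection-plane information to display-plane information follows; the precomputed plane_index mapping of each projection plane to a display plane is an assumption, and other distribution rules are possible:

```python
import numpy as np

def to_display_planes(colors, probs, plane_index, n_planes):
    """Write each projection point's color to the display plane associated
    with its projection plane, with brightness scaled by its existence
    probability, as in the luminance-modulation display described above."""
    planes = np.zeros((n_planes, 3))
    for c, p, k in zip(colors, probs, plane_index):
        planes[k] += p * np.asarray(c, dtype=np.float64)
    return planes

# Two overlapped points sharing existence probability 0.7 / 0.3:
out = to_display_planes([(1.0, 0.0, 0.0)] * 2, [0.7, 0.3], [0, 1], n_planes=2)
# the front plane receives 70% of the red luminance, the back plane 30%
```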
According to the present invention, when an image of a subject is generated after obtaining its three-dimensional shape from a plurality of images, the significant image quality deterioration produced at parts where the reliability of the shape estimate of the subject is low can be reduced. In addition, even with a device of low processing performance, partial image quality deterioration can be reduced and the image can be generated in a short time. Furthermore, the camera system that takes the images used for obtaining the geometric model of the subject can be made smaller, and the apparatus simpler.
Description of drawings
Fig. 1 is a diagram for explaining the problems of a conventional virtual viewpoint image.
Fig. 2 is a schematic diagram for explaining the principle of the virtual viewpoint image generation method in the first embodiment, showing an example of the projection plane group, cameras, reference viewpoint, projection points, and corresponding points.
Fig. 3 is a schematic diagram for explaining the principle of the virtual viewpoint image generation method in the first embodiment.
Fig. 4 is a schematic diagram for explaining the principle of the virtual viewpoint image generation method in the first embodiment, showing an example of blending corresponding to the transparency of projection points.
Fig. 5 is a schematic diagram for explaining the principle of the virtual viewpoint image generation method in the first embodiment, showing an example of the subject, projection plane group, reference viewpoint, virtual viewpoint, and projection points.
Fig. 6 is a schematic diagram of the overall configuration of the virtual viewpoint image generation device of embodiment 1-1, in the form of a block diagram of the internal structure of the image generation device.
Fig. 7 is a schematic diagram of the overall configuration of the virtual viewpoint image generation device of embodiment 1-1, showing a configuration example of a system using the image generation device.
Fig. 8 is a schematic diagram for explaining the mathematical model of the virtual viewpoint image generation method using the device of embodiment 1-1, showing an example of projective transformation.
Fig. 9 is a schematic diagram for explaining the mathematical model of the virtual viewpoint image generation method using the device of embodiment 1-1, showing an example of coordinate transformation.
Fig. 10 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, in the form of a flowchart of the entire generation process.
Fig. 11 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, in the form of a detailed flowchart of the step of generating the virtual viewpoint image.
Fig. 12 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, showing an example of how the projection planes are set.
Fig. 13 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, showing an example of projection points, projection point series, and sets of projection point series.
Fig. 14 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, showing an example of the angle formed by the reference viewpoint, a projection point, and a camera position, used to explain the blending of color information.
Fig. 15 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, showing an example of corresponding point matching processing.
Fig. 16 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, for explaining the rendering process.
Fig. 17 is a schematic diagram for explaining the generation procedure of the virtual viewpoint image of embodiment 1-1, showing an example of the generated virtual viewpoint image.
Fig. 18 is a schematic diagram showing an application example of a system using the virtual viewpoint image generation device of embodiment 1-1.
Fig. 19(a) is a flowchart showing the processing that characterizes embodiment 1-2, and Fig. 19(b) is a flowchart of an example of the concrete procedure of the step of determining transparency information.
Fig. 20 is a schematic diagram for explaining the virtual viewpoint image generation method of embodiment 1-3, showing an example of the projection plane group, reference viewpoint, virtual viewpoint, and projection points.
Fig. 21 is a schematic diagram for explaining the principle of the image generation method of the second embodiment, illustrating the concept of the generation method.
Fig. 22 is a schematic diagram for explaining the principle of the image generation method of the second embodiment, showing Fig. 21 two-dimensionally.
Fig. 23 is a diagram explaining how the degree of correlation of corresponding points is obtained.
Fig. 24 is a diagram explaining a problem that arises when obtaining the degree of correlation of corresponding points.
Fig. 25 is a schematic diagram for explaining the principle of the image generation method of the second embodiment, explaining a method of solving the problem that arises when obtaining the degree of correlation.
Fig. 26 is a diagram explaining an example of a method of improving the accuracy of the existence probability.
Fig. 27 is a schematic diagram for explaining the principle of the image generation method of the second embodiment.
Fig. 28 is a schematic diagram for explaining the principle of the image generation method of the second embodiment.
Fig. 29 is a schematic diagram for explaining the image generation method of embodiment 2-1, in the form of a flowchart of an example of the entire procedure.
Fig. 30 is a schematic diagram for explaining the image generation method of embodiment 2-1, in the form of a flowchart of an example of the procedure of the step in Fig. 29 of determining the color information and existence probability of the projection points.
Fig. 31 is a schematic diagram for explaining the image generation method of embodiment 2-1, in the form of a flowchart showing an example of the step in Fig. 30 of determining the existence probability.
Fig. 32 is a schematic diagram for explaining the image generation method of embodiment 2-1, showing a setting example of the shooting units.
Fig. 33 is a schematic diagram for explaining the image generation method of embodiment 2-1, explaining a method of converting the information of the projection planes into the information of the display planes.
Fig. 34 is a diagram explaining a method of converting the information of the projection planes into the information of the display planes.
Fig. 35 is a block diagram of a configuration example of an image generation device applying the image generation method of embodiment 2-1.
Fig. 36 is a diagram of a configuration example of an image display system using an image generation device that adopts the image generation method of embodiment 2-1.
Fig. 37 is a diagram of another configuration example of an image display system using an image generation device that adopts the image generation method of embodiment 2-1.
Fig. 38 is a schematic diagram for explaining the image generation method of embodiment 2-2, showing an example of the entire procedure.
Fig. 39 is a schematic diagram for explaining the image generation method of embodiment 2-2, explaining the principle of rendering.
Fig. 40 is a schematic diagram for explaining the image generation method of embodiment 2-2, explaining a problem that arises in the image generation method of the second embodiment.
Fig. 41 is a schematic diagram for explaining the image generation method of embodiment 2-2, explaining a solution to the problem that arises in the image generation method of the second embodiment.
Fig. 42 is a schematic diagram for explaining the image generation method of embodiment 2-2, in the form of a flowchart of an example of the procedure for converting the existence probability into transparency.
Fig. 43 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, showing a setting example of the projection planes and the reference viewpoint.
Fig. 44 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, showing a setting example of the projection planes and the reference viewpoint.
Fig. 45 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, explaining how the color information and focus degree of the projection points are determined.
Fig. 46 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, explaining how the existence probability of the projection points is determined.
Fig. 47 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, explaining how the existence probability of the projection points is determined.
Fig. 48 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, explaining how the existence probability of the projection points is determined.
Fig. 49 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, explaining the method of generating the image seen from the virtual viewpoint.
Fig. 50 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, explaining a problem that arises in the image generation method of the present invention.
Fig. 51 is a schematic diagram for explaining the principle of the image generation method of the third embodiment, explaining a method of solving the problem that arises in the image generation method of the present invention.
Fig. 52 is a schematic diagram for explaining the mathematical model of the image generation method of the third embodiment, showing the relation among the projection points, the corresponding points, and the points on the generated image.
Fig. 53 is a schematic diagram for explaining the mathematical model of the image generation method of the third embodiment, explaining the conversion between points in space and pixels on the image.
Fig. 54 is a schematic diagram for explaining the image generation method of embodiment 3-1, in the form of a flowchart of the image generation process.
Fig. 55 is a schematic diagram for explaining the image generation method of embodiment 3-1, explaining how the projection point series are set.
Fig. 56 is a schematic diagram for explaining the image generation method of embodiment 3-1, in the form of a flowchart of a concrete example of the processing of step 10305 in Fig. 54.
Fig. 57 is a schematic diagram for explaining the image generation method of embodiment 3-1, explaining the rendering method.
Fig. 58 is a schematic diagram of the overall configuration of a device that generates images by the image generation method of embodiment 3-1, in the form of a block diagram of the device structure.
Fig. 59 is a diagram explaining a configuration example of the subject imaging unit in embodiment 3-1.
Fig. 60 is a diagram explaining a configuration example of the subject imaging unit in embodiment 3-1.
Fig. 61 is a diagram explaining a configuration example of the subject imaging unit in embodiment 3-1.
Fig. 62 is a schematic diagram of the overall configuration of an image generation system using the image generation device of embodiment 3-1, showing one configuration example of the image generation system.
Fig. 63 is a schematic diagram of the overall configuration of an image generation system using the image generation device of embodiment 3-1, showing another configuration example of the image generation system.
Fig. 64 is a flowchart of the processing of the virtual viewpoint image generation method of embodiment 3-2.
Fig. 65 is a schematic diagram for explaining another generation method within the image generation method of the third embodiment.
Fig. 66 is a schematic diagram for explaining the image generation method of embodiment 4-1, which is based on the third embodiment, in the form of a flowchart of an example of the entire procedure.
Fig. 67 is a schematic diagram for explaining the image generation method of embodiment 4-1, showing an example of how the projection planes are set.
Fig. 68 is a schematic diagram for explaining the image generation method of embodiment 4-1, showing an example of how the projection planes are set.
Fig. 69 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the projection point series are set.
Fig. 70 is a schematic diagram for explaining the image generation method of embodiment 4-1, in the form of a flowchart of an example of the procedure of the step of determining the color information and existence probability of the projection points.
Fig. 71 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the existence probability is determined.
Fig. 72 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the existence probability is determined.
Fig. 73 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the existence probability is determined.
Fig. 74 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the existence probability is determined.
Fig. 75 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the two-dimensional images displayed on each image display plane are generated.
Fig. 76 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the two-dimensional images displayed on each image display plane are generated.
Fig. 77 is a schematic diagram for explaining the image generation method of embodiment 4-1, explaining how the two-dimensional images displayed on each image display plane are generated.
Fig. 78 is a schematic diagram for explaining the image generation method of embodiment 4-2, showing the relation between the projection points and the corresponding points.
Fig. 79 is a schematic diagram for explaining the image generation method of embodiment 4-2, in the form of a flowchart of an example of the step of determining the color information and existence probability of the projection points.
Fig. 80 is a schematic diagram for explaining the image generation method of embodiment 4-2, explaining how the existence probability is obtained.
Fig. 81 is a schematic diagram for explaining the image generation method of embodiment 4-2, explaining how the existence probability is obtained.
Fig. 82 is a schematic diagram for explaining the arbitrary viewpoint image generation method of embodiment 4-3, in the form of a flowchart of an example of the entire procedure.
Fig. 83 is a schematic diagram for explaining the arbitrary viewpoint image generation method of embodiment 4-3, explaining the principle of rendering.
Fig. 84 is a flowchart of an example of the procedure for converting the existence probability into transparency in embodiment 4-3.
Fig. 85 is a schematic diagram of the overall configuration of the image generation device of embodiment 4-4.
Fig. 86 is a schematic diagram of the overall configuration of the image generation device of embodiment 4-4.
Fig. 87 is a schematic diagram of the overall configuration of the image generation device of embodiment 4-4, showing a configuration example of an image display system using the image generation device.
Fig. 88 is a schematic diagram of the overall configuration of the image generation device of embodiment 4-4, showing a configuration example of an image display system using the image generation device.
Fig. 89 is a schematic diagram of the overall configuration of the image generation device of embodiment 4-4, showing a configuration example of an image display system using the image generation device.
Fig. 90 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, which is based on the fifth embodiment, in the form of a flowchart of an example of the entire procedure.
Fig. 91 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, showing an example of how the projection planes are set.
Fig. 92 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, showing an example of how the projection planes are set.
Fig. 93 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, explaining how the projection points are set.
Fig. 94 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, in the form of a flowchart of an example of the procedure of the step of determining the color information and existence probability of the projection points.
Fig. 95 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, explaining how the existence probability is determined.
Fig. 96 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, explaining how the existence probability is determined.
Fig. 97 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, explaining how the existence probability is determined.
Fig. 98 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, explaining how the two-dimensional images displayed on each image display plane are generated.
Fig. 99 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, explaining how the two-dimensional images displayed on each image display plane are generated.
Fig. 100 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-1, explaining how the two-dimensional images displayed on each image display plane are generated.
Fig. 101 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-2, showing the relation between the projection points and the corresponding points.
Fig. 102 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-2, in the form of a flowchart of an example of the step of determining the color information and existence probability of the projection points.
Fig. 103 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-2, explaining how the existence probability is obtained.
Fig. 104 is a schematic diagram for explaining the three-dimensional image display method of embodiment 5-2, explaining how the existence probability is obtained.
Symbol description
(first embodiment)
1, 1A, 1B, 1C ... virtual viewpoint image generation device; 101 ... virtual viewpoint determination unit; 102 ... subject image acquisition unit; 103 ... image generation unit; 103a ... projection plane determination unit; 103b ... reference viewpoint determination unit; 103c ... texture array securing unit; 103d ... corresponding point matching processing unit; 103e ... color information determination unit; 103f ... existence probability information determination unit; 103g ... rendering unit; 104 ... generated image output unit; 2 ... viewpoint position input unit; 3 ... subject imaging unit (camera); 4 ... image display unit; 6 ... virtual viewpoint image; 7 ... image of the subject; 7A ... part with image deterioration; 7B ... part with image defects.
(second embodiment)
6, 6A, 6B, 6C ... image generation device; 601 ... subject image acquisition unit; 602 ... observer viewpoint setting unit; 603 ... projection plane setting unit; 604 ... projection plane information storage area securing unit; 605 ... color information / existence probability determination unit; 606 ... projection plane information to display plane information conversion unit; 607 ... image output unit; 7, 7A, 7B ... image display unit; 8, 8A, 8B ... subject imaging unit; 9, 9A, 9B ... reference viewpoint input unit
(the 3rd embodiment)
2, 2A, 2B, 2C ... image generation device; 201 ... subject image acquisition unit; 202 ... virtual viewpoint setting unit; 203 ... projection plane etc. setting unit; 204 ... texture array securing unit; 205 ... color information / existence probability determination unit; 206 ... rendering unit; 207 ... generated image output unit; 3, 3A, 3B ... subject imaging unit; 4, 4A, 4B ... viewpoint information input unit; 5, 5A, 5B ... image display unit; 6 ... polarization-type binary optical system; 7, 7A, 7B ... image sensor; 8 ... beam splitter; 9 ... polarizing filter; 10 ... zoom lens; 11a, 11b, 11c, 11d ... fixed-focus lens; 12 ... lens mount
(the 4th embodiment)
2, 2A, 2B, 2C ... image generation device; 201 ... subject image acquisition unit; 202 ... observer viewpoint setting unit; 203 ... projection plane etc. setting unit; 204 ... texture array securing unit; 205 ... color information / existence probability determination unit; 206 ... projection plane information to display plane information conversion unit; 207 ... image output unit; 208 ... rendering unit; 3, 3A, 3B ... image display unit; 4, 4A, 4B ... subject imaging unit; 5, 5A, 5B ... reference viewpoint input unit
(the 5th embodiment)
2, 2A, 2B, 2C ... three-dimensional image generation device; 201 ... subject image acquisition unit; 202 ... observer viewpoint setting unit; 203 ... projection plane etc. setting unit; 204 ... texture array securing unit; 205 ... color information / existence probability determination unit; 206 ... projection plane information to display plane information conversion unit; 207 ... image output unit; 3, 3A, 3B ... image display unit; 4, 4A, 4B ... subject imaging unit; 5, 5A, 5B ... reference viewpoint input unit
Embodiment
As the best modes for carrying out the invention, the first to fifth embodiments are described below.
[first embodiment]
First, the first embodiment of the present invention is described. The first embodiment mainly corresponds to claims 1 to 11. In this embodiment, an example is shown in which color information is expressed with the three primary colors red (R), green (G), and blue (B); however, an expression using luminance (Y) and color difference (U, V) may also be used, and for monochrome images only luminance information may be used as the color information. In the figures for explaining the first embodiment, parts having the same function are given the same reference numerals.
Before the examples of the first embodiment are described, the principle of the virtual viewpoint image generation method in the first embodiment is explained first.
Figs. 2 to 5 are schematic diagrams for explaining the principle of the virtual viewpoint image generation method of the present invention. Fig. 2 shows an example of the projection plane group, cameras, reference viewpoint, projection points, and corresponding points; Figs. 3(a) and 3(b) show examples of graphs of the degree of correlation among corresponding points; Fig. 4(a) shows an example of blending corresponding to the transparency of projection points; Fig. 4(b) shows an example of blending color information in color space according to transparency; and Fig. 5 shows an example of the subject, projection plane group, reference viewpoint, virtual viewpoint, and projection points.
The virtual viewpoint image generation method of the present invention comprises: step 1 of setting a projection plane group having a multilayer structure; step 2 of obtaining the points (corresponding points) on the images captured by the plurality of cameras that correspond to each point (projection point) on the projection planes; step 3 of determining the color information of each projection point by mixing the color information of the plurality of corresponding points or by selecting one of them; step 4 of calculating, for a plurality of projection points that appear to overlap when viewed from a certain viewpoint in space (the reference viewpoint), the degree of possibility that the subject exists at the distance of each projection point (existence probability information), based on the degree of correlation of the corresponding points or their neighborhoods; step 5 of applying blending corresponding to the existence probability information to the color information of the projection points that appear to overlap from the virtual viewpoint, thereby determining the color information of each pixel of the virtual viewpoint image; and step 6 of repeating steps 1 to 5 for all points corresponding to the pixels of the virtual viewpoint image. That is, unlike existing means, the method does not try to obtain an accurate geometric model of the subject in all cases and at all parts. Rather, it starts from the premise that, depending on the shooting conditions and the part of the subject, distance estimation cannot always yield estimates of sufficient reliability. Parts for which only low-reliability estimates are obtained are drawn vaguely, so that their contribution to the generated image is reduced and extreme image deterioration is prevented, while parts for which highly reliable distance data are obtained are drawn clearly, so that their contribution to the generated image is increased.
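Written out, step 5 amounts to one weighted sum per output pixel (a sketch; the symbols $K$ and $\beta$ are introduced here for illustration):

$$K_v = \sum_{j=1}^{M} \beta_j K_j, \qquad \sum_{j=1}^{M} \beta_j = 1,$$

where $K_j$ is the color information determined in step 3 for the $j$-th projection point on the line of sight, $\beta_j$ is its existence probability information from step 4 normalized along that line, and $K_v$ is the resulting color of the virtual viewpoint pixel.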
Here, the reliability of the estimation is judged from the degree of correlation between corresponding points of the captured images, as follows. For example, as shown in Fig. 2, a reference viewpoint R, camera centers C_i (i=1,2,...,N), and mutually parallel projection planes L_j (j=1,2,...,M) are set, and the corresponding point obtained when the camera with center C_i photographs the projection point T_j is denoted G_ij.
Then, for a certain projection point T_m on projection plane L_m, the set of corresponding points {G_im | i=1,2,...,N} is obtained, and their degree of correlation can be calculated.
Here, when the degree of correlation is calculated for the plurality of projection points T_j (j=1,2,...,M) lying on one straight line as seen from the reference viewpoint R, and the distance l between the reference viewpoint R and the projection plane is plotted on the horizontal axis with the degree of correlation on the vertical axis, a graph such as Fig. 3(a) or Fig. 3(b) is obtained. Concrete methods for computing the degree of correlation are described in the later examples; here, a larger degree of correlation means the corresponding points are more similar to one another.
At a distance where the degree of correlation is high, the corresponding points in the plural cameras are similar, and the probability that the same point on the subject was photographed at the position of the projection point is high, so the probability that the subject exists at that distance is also high. If it is assumed that there is exactly one distance at which the subject exists on the straight line through the reference viewpoint R, then, as shown in Fig. 3(a) and Fig. 3(b), the subject can be estimated to exist at the distance l = l* giving the highest degree of correlation.
Here, as shown in Fig. 3(a), when the degree of correlation of the corresponding points at distance l = l* is much higher than for the other candidates, the estimate can be judged highly reliable; but, as shown in Fig. 3(b), when there are several candidate estimates, such as distance l = l* and distance l = l′, whose corresponding points have degrees of correlation of about the same magnitude, the reliability of the estimate is low.
In the case shown in Fig. 3(b), if the method of drawing only the one projection point at the distance l = l* of highest correlation is adopted, then, when the estimate is in fact wrong and the subject is located at distance l = l′, large noise appears in the generated image.
In contrast, in the present invention the possibility that the subject exists (existence probability information) is computed from the degree of correlation, and the plural projection points are drawn with sharpness according to their existence probability information. Thus, when the reliability of the estimate is low, the plural projection points are drawn blurrily, so that noise in the generated image is made inconspicuous and an image that looks better to the observer can be generated.
On the other hand, when the reliability of the estimate is high, the projection point with high existence probability information is drawn clearly, so a better image can be generated.
Furthermore, the drawing method of the present invention can be implemented simply by texture mapping, a basic technique of computer graphics, and can be processed efficiently by the 3D graphics hardware installed in ordinary personal computers, which has the effect of lightening the load on the computer.
In addition, in the virtual viewpoint image generation method of the present invention, each projection point on the projection planes may be given a transparency graded from fully transparent through to opaque. The transparency of each projection point is computed by converting the existence probability information obtained in step 4; in step 5, the mixing processing for obtaining the color information of each point at the virtual viewpoint proceeds from the projection point farthest from the virtual viewpoint toward the nearest one, and the color information obtained by the mixing up to a certain projection point is an interior division, in a ratio according to the transparency, of the color information at that projection point and the color information obtained by the mixing up to the preceding projection point. Thus, the color information obtained by the mixing processing is an interior division of the color information at one stage and the color information thereafter.
Here, for example as shown in Fig. 4(a), consider the case where projection planes L_j (j=1,2,...,M), projection points T_j (j=1,2,...,M), and vectors K_j (j=1,2,...,M) expressing the color information of the projection points with red, green, and blue (R, G, B) components are set in the color space represented by the following formula 1.
[formula 1]
$K_j \in V, \quad V \equiv \{(R,G,B) \mid 0 \le R \le 1,\ 0 \le G \le 1,\ 0 \le B \le 1\}$
In addition, the transparency α_j (j=1,2,...,M) of the projection points is set so as to satisfy the following formula 2.
[formula 2]
$0 \le \alpha_j \le 1$
At this time, the color information D_m obtained by the mixing processing up to j = m is expressed by recursion formulas such as the following formulas 3 and 4, and the color information D_M after mixing up to the frontmost point j = M as seen from the virtual viewpoint becomes the color information at the virtual viewpoint.
[formula 3]
$D_m = \alpha_m K_m + (1 - \alpha_m) D_{m-1}$
[formula 4]
$D_1 = \alpha_1 K_1$
Here, by the relation of formulas 2 and 3, the color information D_m is an interior division point of K_m and D_{m-1} in the color space V; therefore, as shown in Fig. 4(b), if K_m, D_{m-1} ∈ V, then D_m ∈ V.
Consequently, when the conditions of formulas 1 and 2 are satisfied, the color information D_M at the virtual viewpoint is guaranteed to satisfy the following formula 5.
[formula 5]
$D_M \in V$
A guarantee such as formula 5 can be proved by mathematical induction; the detailed proof is omitted.
That is, when the color information and transparencies of the projection points are set so as to satisfy formulas 1 and 2, the color information at the virtual viewpoint is necessarily contained in the appropriate color space V.
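The following is a minimal Python sketch of the recursion of formulas 3 and 4, with illustrative color and transparency values; it also checks the guarantee of formula 5 that the mixed color stays inside the color space V.

```python
import numpy as np

def composite(colors, alphas):
    """Back-to-front mixing of formulas 3 and 4: colors[0] is the projection
    point farthest from the virtual viewpoint, colors[-1] the nearest."""
    d = alphas[0] * colors[0]                 # formula 4: D_1 = alpha_1 K_1
    for k, a in zip(colors[1:], alphas[1:]):
        d = a * k + (1.0 - a) * d             # formula 3: interior division
    return d

colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.2, 0.2, 1.0]])
alphas = np.array([0.8, 0.5, 0.3])
d = composite(colors, alphas)
# Each step is an interior division in the color cube V, so the result
# always satisfies 0 <= D_M <= 1 componentwise, with no clipping needed.
print(d, np.all((d >= 0) & (d <= 1)))
```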
Owing to this property, when a plurality of virtual viewpoint images are generated for the same subject, even if the color information and transparency information of the projection points are computed with respect to a single reference viewpoint, as long as they satisfy formulas 1 and 2, every virtual viewpoint image is generated within the range of valid color information.
Here, for example as shown in Fig. 5, suppose a subject Obj exists, two projection planes L_1 and L_2, a reference viewpoint R, and a virtual viewpoint P are set, the color information at the projection points T_1, T_2, T_1′, T_2′ is K_1, K_2, K_1′, K_2′ respectively, and the degrees of possibility that the subject exists there are β_1, β_2, β_1′, β_2′ respectively.
The degree of possibility that the subject exists (existence possibility information) is computed on each straight line passing through the reference viewpoint R, and the existence possibility information of the projection points on the same straight line sums to 1. Since the surface of the subject lies near the projection points T_1′ and T_2, the existence possibility information at these points is higher than at T_1 and T_2′. The existence possibility information then becomes as in the following formulas 6 and 7.
[formula 6]
$\beta_1 \cong 0, \quad \beta_2 \cong 1$
[formula 7]
$\beta_1' \cong 1, \quad \beta_2' \cong 0$
At this time, the color information K_A of the point A on the image plane of the virtual viewpoint P is computed by weighting the color information of the projection points on the straight line PA according to their existence possibility information and adding them together, giving the following formula 8.
[formula 8]
$K_A = \beta_1' K_1' + \beta_2 K_2$
Further, by formulas 6 and 7, formula 8 is expressed as the following formula 9.
[formula 9]
$K_A \cong K_1' + K_2$
When observed from the virtual viewpoint P, the portion of the subject near T_1′ is occluded by the portion near T_2, so the true color information of the point A is K_A = K_2; in formula 9, however, the luminance of each (R, G, B) component is raised by the amount of K_1′.
Moreover, when each component of K_1′ and K_2 has large luminance, K_A can exceed the range of the valid color space. Clipping processing is then needed to bring it back within the range of valid color information.
Suppose instead that transparencies are obtained from the existence probability information, for example by the computation method described later in embodiment 1-2; they are then computed as in the following formulas 10 and 11.
[formula 10]
$\alpha_2 = \beta_2, \quad \alpha_1 = 1$
[formula 11]
$\alpha_2' = \beta_2', \quad \alpha_1' = 1$
Here, in formulas 10 and 11, α_1, α_2, α_1′, α_2′ are the transparencies of T_1, T_2, T_1′, T_2′ respectively.
To obtain the color information of each point at the virtual viewpoint, the mixing processing proceeds from the projection point farthest from the virtual viewpoint toward the nearest one, the color information obtained by the mixing up to a certain projection point being the interior division, in the ratio according to the transparency, of the color information at that projection point and the color information obtained by the mixing up to the preceding projection point; K_A then becomes as in the following formula 12.
[formula 12]
$K_A = \alpha_2 K_2 + (1 - \alpha_2)\alpha_1' K_1'$
At this time, by formulas 6, 7, 10, and 11, formula 12 becomes the following formula 13, a good approximation of the true color information.
[formula 13]
$K_A \cong K_2$
As described above, in image generation using the existence probability information directly, there is no problem when the reference viewpoint coincides with the virtual viewpoint, but when the two differ, an increase of luminance can occur near occluded regions of the subject; in contrast, image generation in which the existence probability information is converted into transparency has the effect of preventing this phenomenon.
Further, in image generation using the existence probability information directly, when the reference viewpoint differs from the virtual viewpoint, the computation of the color information cannot be guaranteed to stay within the range of valid color information, so that, for example, correction processing becomes necessary; in contrast, image generation converted to transparency needs no such correction.
In addition, image generation in which the existence probability information is converted into transparency can also represent semi-transparent subjects effectively, so the present invention can be applied to a wider range of subjects in the real world.
Further, in the virtual viewpoint image generation method of the present invention, a projection plane group specific to each camera may be set in step 1; in step 3, the color information of a projection point uses the color information of the corresponding point in the image captured by the camera to which the projection plane containing that projection point belongs; in step 4, the existence probability information is computed taking the viewpoint of that camera as the reference viewpoint; and in step 5, the mixing processing of the color information at the virtual viewpoint is corrected according to the positional relation between the virtual viewpoint and each reference viewpoint. In this way, since the projection plane group specific to each camera is set independently of the positional relation among the cameras, even if the camera arrangement is complicated or irregular, the setting of the projection plane groups is unaffected, and image generation can be carried out by a uniform processing procedure.
In addition, when the projection plane group specific to each camera is set, the color information of a projection plane requires no mixing processing between images captured by different cameras. Therefore, for example, when processing on a computer, the planes can be processed in parallel, and the image generation can be speeded up.
Furthermore, the color information is identical for all projection planes of the group corresponding to the same camera; therefore, when processing on a computer, the texture memory storing the color information can be shared. Memory consumption thus does not grow excessively with the number of projection planes, and the load on the device used for image generation can be lightened.
Also, since the camera corresponding to a given projection plane is uniquely determined, calibration such as correction of the distortion aberration of the lens can be performed easily and at high speed by establishing the coordinate correspondence between the two in advance.
A program that executes the virtual viewpoint image generation method of the first embodiment of the present invention on a dedicated device, an ordinary personal computer, or the like therefore has a wide range of application and high generality.
Below, embodiments are given to describe devices that execute the virtual viewpoint image generation method of the first embodiment and concrete image generation methods.
(embodiment 1-1)
Fig. 6 and Fig. 7 are schematic diagrams showing the schematic configuration of the virtual viewpoint image generation device of embodiment 1-1 of the present invention; Fig. 6 is a block diagram showing the internal structure of the image generation device, and Fig. 7 shows a configuration example of a system using the image generation device.
In Fig. 6, 1 is the virtual viewpoint image generation device; 101 is the virtual viewpoint determination unit; 102 is the subject image acquisition unit; 103 is the image generation unit; 103a is the projection plane determination unit; 103b is the reference viewpoint determination unit; 103c is the texture array securing unit; 103d is the corresponding point matching processing unit; 103e is the color information determination unit; 103f is the existence probability information determination unit; 103g is the rendering unit; 104 is the generated image output unit; 2 is the viewpoint position input unit; 3 is the subject imaging unit; and 4 is the image display unit. In Fig. 7, User is the user of the virtual viewpoint image generation device, and Obj is the subject.
As shown in Fig. 6 and Fig. 7, the virtual viewpoint image generation device 1 of embodiment 1-1 comprises: the virtual viewpoint determination unit 101, which determines the parameters of the viewpoint (virtual viewpoint) input by the user User with the viewpoint position input unit 2; the subject image acquisition unit 102, which acquires the images of the subject Obj captured by the subject imaging units (cameras) 3 located at the plurality of viewpoint positions C_i; the image generation unit 103, which, based on the acquired images of the subject Obj, generates the image (virtual viewpoint image) seen when the subject Obj is observed from the virtual viewpoint; and the generated image output unit 104, which displays the virtual viewpoint image generated by the image generation unit 103 on the image display unit 4.
The virtual viewpoint determination unit 101 determines, for example, the position, direction, and angle of view as the parameters of the virtual viewpoint. The viewpoint position input unit 2 may, for example as shown in Fig. 7, be a device operated by the user User for selection, such as a mouse, a device for direct numerical input by the user User, such as a keyboard, or a position sensor worn by the user User; it may also be provided by another program or via a network.
The subject image acquisition unit 102 may acquire the changing position of the subject successively at fixed intervals, for example at 30 Hz, may acquire a still image of the subject at an arbitrary moment, or may acquire subject images captured in advance by reading them from a recording device. Preferably, the subject images from the plural viewpoint positions are captured in synchronization at the same moment among all cameras; however, when the position of the subject changes very slowly and can be regarded as stationary, this is not a limitation.
As shown in Fig. 6, the image generation unit 103 comprises: the projection plane determination unit 103a, which determines the position and shape of the projection planes used for image generation; the reference viewpoint determination unit 103b, which determines the position of the reference viewpoint; the texture array securing unit 103c, which allocates on memory the arrays for the texture images to be mapped onto the projection planes; the corresponding point matching processing unit 103d, which obtains, among the subject images acquired by the subject image acquisition unit 102, the correspondence between the parts at the plural viewpoint positions that photographed the same region of the subject; the color information determination unit 103e, which determines the color information in the texture arrays secured by the texture array securing unit 103c by mixing the color information of the plural acquired subject images; the existence probability information determination unit 103f, which, based on the result of the corresponding point matching processing unit 103d, determines the degree of possibility (existence probability information) that the subject exists on the projection planes, held in the texture arrays secured by the texture array securing unit 103c; and the rendering unit 103g, which renders the projection planes as seen from the virtual viewpoint, based on the color information determined by the color information determination unit 103e and the existence probability information determined by the existence probability information determination unit 103f.
The arrays secured by the texture array securing unit 103c hold, for each pixel, color information and existence probability information; for example, each of the three primary colors red (R), green (G), and blue (B) and the existence probability information is expressed with 8 bits. However, the present invention does not depend on such a particular data representation.
The image display unit 4 is, for example, a display device such as a CRT (Cathode Ray Tube), an LCD (Liquid Crystal Display), or a PDP (Plasma Display Panel) connected to the generated image output unit 104 of a display terminal or the like. The image display unit 4 may, for example, be a flat two-dimensional display device, or a curved display device surrounding the user User. Further, if a display device capable of stereoscopic display is used as the image display unit 4, two virtual viewpoints suited to the left and right eyes of the user User can be determined by the virtual viewpoint determination unit 101, the virtual viewpoint images seen from the two virtual viewpoints generated by the image generation unit 103, and independent images presented to the left and right eyes of the user. Also, if images seen from three or more virtual viewpoints are generated and a three-dimensional display capable of showing images with three or more parallaxes is used, a stereoscopic image can be presented to more than one user.
A system using the virtual viewpoint image generation device 1 has, for example, the configuration shown in Fig. 7: when the user User designates a desired viewpoint position, direction, and angle of view to the virtual viewpoint image generation device 1 through the viewpoint position input unit 2, the virtual viewpoint image generation device 1 captures the subject Obj with the subject imaging units (cameras) 3 to acquire its images, and then, based on the acquired images of the subject, generates the image (virtual viewpoint image) from the designated viewpoint. The generated virtual viewpoint image is presented to the user User by the image display unit 4.
The system configuration of Fig. 7 shows one example of an implementation of the image generation device of the present invention; the claims of the present invention are not necessarily limited to such a configuration, and the arrangement and form of each device and their installation are arbitrary within a scope not departing from the gist of the present invention.
Next, the image generation processing performed by the image generation unit 103 is described; before its concrete procedure, the mathematical model on which the processing is premised is explained.
Fig. 8 and Fig. 9 are schematic diagrams for explaining the mathematical model of the virtual viewpoint image generation method using the virtual viewpoint image generation device of embodiment 1-1; Fig. 8 shows an example of projective transformation, and Fig. 9 shows an example of coordinate transformation.
In the image generation processing of the virtual viewpoint image generation device of embodiment 1-1, for example as shown in Fig. 8, the centers C_i (i=1,2,...,N) of the cameras 3, the virtual viewpoint P, and the projection planes L_j (j=1,2,...,M) are set. Hereinafter, in order to distinguish the plural cameras 3, the camera center C_i also denotes the camera itself; similarly, P denotes the virtual viewpoint itself as well as the position of its center.
In Fig. 8, the cameras C_i are arranged in a horizontal row, but the invention is not restricted to such an arrangement, and can be applied, for example, to various arrangements such as a two-dimensional lattice or an arc. The projection planes L_j also need not be parallel; as in embodiment 1-3 described later, they may be curved surfaces. In the explanation of embodiment 1-1, however, the projection planes L_j are planes.
In the virtual viewpoint image generation method of the present embodiment, the image at the virtual viewpoint P, where no camera is arranged, is generated from the images of the subject Obj obtained at the actually arranged camera positions C_i, basically by the following procedure: parts of the images of the subject captured by the cameras C_i are mapped (texture mapped) onto the imaginary projection planes L_j inside the virtual viewpoint image generation device 1, such as a computer, and the image seen when the textured projection planes are observed from the virtual viewpoint P is generated by coordinate computation.
In such processing, the virtual viewpoint P and the cameras C_i project points in three-dimensional space to two-dimensional points on their respective image planes.
In general, the matrix projecting a point (X, Y, Z) in three-dimensional space to a point (x, y) on an image plane is a 3-row, 4-column matrix, and can be expressed as the following formulas 14 and 15.
[formula 14]
$s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \Phi \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$
[formula 15]
$\Phi = \begin{pmatrix} \phi_{11} & \phi_{12} & \phi_{13} & \phi_{14} \\ \phi_{21} & \phi_{22} & \phi_{23} & \phi_{24} \\ \phi_{31} & \phi_{32} & \phi_{33} & \phi_{34} \end{pmatrix}$
For example, the matrix Φ_0 representing perspective projection with focal length f centered at the origin is given by the following formula 16.
[formula 16]
$\Phi_0 = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}$
In addition, images processed by a computer are so-called digital images, expressed by two-dimensional arrays on memory. The coordinate system (u, v) expressing positions in this array is called the digital image coordinate system.
For example, a point on a digital image of size 640 pixels × 480 pixels is expressed by a variable u taking an integer value from 0 to 639 and a variable v taking an integer value from 0 to 479, and the color information of the point is expressed by the red (R), green (G), and blue (B) information at that address, each quantized to 8 bits.
The image coordinates (x, y) shown in Fig. 9(a) and the digital image coordinates (u, v) shown in Fig. 9(b) correspond one to one and have, for example, the relation of the following formula 17.
[formula 17]
$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} k_u & -k_u \cot\theta & u_0 \\ 0 & k_v / \sin\theta & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$
Here, the x axis of Fig. 9(a) is parallel to the u axis of Fig. 9(b); the unit lengths of the u axis and v axis, measured with the (x, y) coordinate system as reference, are k_u and k_v; the angle formed by the u axis and v axis is θ; and (u_0, v_0) are the digital image coordinates of the origin of the (x, y) coordinate system.
When writing to and reading from the two-dimensional array, the digital image coordinates (u, v) are quantized; in the following description, however, they are assumed to take continuous values unless otherwise noted, and appropriate discretization is performed when accessing the array.
Moreover, in this coordinate transformation, in addition to the relation of formula 17, image distortion caused by lens aberration can also be corrected.
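As an illustration of formula 17, the following Python sketch maps image coordinates to digital image coordinates; all numerical parameter values are assumptions of this sketch.

```python
import numpy as np

def image_to_digital(x, y, ku, kv, theta, u0, v0):
    """Formula 17: map image coordinates (x, y) to digital image
    coordinates (u, v)."""
    A = np.array([[ku, -ku * np.cos(theta) / np.sin(theta), u0],
                  [0.0, kv / np.sin(theta), v0],
                  [0.0, 0.0, 1.0]])
    u, v, _ = A @ np.array([x, y, 1.0])
    return u, v

# Square pixels (ku = kv), perpendicular axes (theta = 90 degrees), and the
# principal point at the center of a 640x480 image -- illustrative values.
print(image_to_digital(0.0, 0.0, 1000.0, 1000.0, np.pi / 2, 320.0, 240.0))
```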
Next, the concrete procedure for generating the virtual viewpoint image is described using the above mathematical model.
Fig. 10 to Fig. 17 are schematic diagrams for explaining the virtual viewpoint image generation procedure of embodiment 1-1. Fig. 10 is a flowchart of the whole generation processing; Fig. 11 is a flowchart of the concrete steps of generating the virtual viewpoint image; Fig. 12 shows an example of the method of setting the projection planes; Fig. 13 shows an example of projection points, projection point series, and the set of projection point series; Fig. 14 shows an example of the angle formed by the reference viewpoint, a projection point, and a camera position, used in the mixing processing of color information; Fig. 15 shows an example of the corresponding point matching processing; Fig. 16 is a diagram for explaining the rendering processing; and Fig. 17 shows an example of the generated virtual viewpoint image.
When a virtual viewpoint image is generated with the virtual viewpoint image generation device 1 of embodiment 1-1, first, as shown in Fig. 10, the parameters of the virtual viewpoint P are determined by the virtual viewpoint determination unit in response to a request from the user User (step 501). In step 501, for example, the position, direction, and angle of view of the virtual viewpoint P are determined.
Next, the images of the subject Obj captured by the plural cameras 3 (C_i) are acquired by the subject image acquisition unit 102 (step 502).
Then, based on the subject images acquired by the subject image acquisition unit 102, the image (virtual viewpoint image) seen when the subject Obj is observed from the virtual viewpoint P is generated (step 503).
In step 503, for example, the processing of each step shown in Fig. 11 is carried out to generate the virtual viewpoint image.
In the processing of step 503, first, the position and shape of the projection planes L_j (j ∈ J, J ≡ {1,2,...,M}) of the multilayer structure used for generating the virtual viewpoint image are determined by the projection plane determination unit 103a (step 503a). In step 503a, the projection planes L_j are determined, for example, as planar surfaces arranged in parallel at equal intervals as shown in Fig. 8.
Alternatively, when the cameras C_i are arranged at equal intervals, the planes (projection planes) may be arranged according to the series of distances l_d (d=1,2,3,...) obtained by the following formula 18, where B is the camera interval, F is the camera focal length, and δ is the size of one pixel of the image plane.
[formula 18]
$l_d = \frac{BF}{\delta d} \quad (d = 1, 2, 3, \ldots)$
In this case, the intervals of the projection planes coincide with the depth resolution of corresponding point matching between the cameras. That is, as shown in Fig. 12, when the cameras C_n and C_{n-1} are set at interval B, the point on the image plane of C_n is denoted A, the point on the image plane of C_{n-1} equivalent to the point A of C_n is denoted A_0′, and the point d pixels away from A_0′ is denoted A_d′, the corresponding points of A form the sequence {A_d′ | d=1,2,...}, and the series of distances computed at this time is given by formula 18.
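A short sketch of formula 18, with assumed values of B, F, and δ:

```python
# Illustrative values only: camera interval B = 0.1 m, focal length
# F = 0.05 m, pixel size delta = 10 micrometers.
B, F, delta = 0.1, 0.05, 10e-6

# Formula 18: plane distances matched to the depth resolution of a
# one-pixel disparity step between adjacent cameras.
distances = [B * F / (delta * d) for d in range(1, 6)]
print(distances)   # [500.0, 250.0, 166.66..., 125.0, 100.0] (meters)
```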
However, this setting of the projection planes L_j is only an example; the image generation method of the present invention basically requires only that two or more different projection planes be set, and is not limited to this particular setting method.
When the processing of step 503a is finished, the point serving as the reference (reference viewpoint) R, used in the later processing when computing the degree of possibility that the subject exists at a projection point (the existence probability information), is determined by the reference viewpoint determination unit 103b (step 503b). The position of the reference viewpoint R may coincide with the position of the virtual viewpoint P, and when there are plural virtual viewpoints, it may be taken as their barycentric position. However, the present invention does not depend on a particular method of determining the reference viewpoint.
When the processing of step 503b is finished, a plurality of projection points are set on the projection planes (step 503c). The projection points are set so as to lie on a plurality of straight lines passing through the reference viewpoint R, and the projection points on the same straight line are handled together as a projection point series. Here, for example as shown in Fig. 13, attending to a certain straight line through the reference viewpoint R, the projection point on projection plane L_j is denoted T_j, and the projection point series collecting them is denoted S, written S = {T_j | j ∈ J}; when the set of projection point series is denoted Σ, we have S ∈ Σ.
When the processing of step 503c is finished, the arrays (texture arrays) holding the images to be texture mapped onto the projection planes are secured on the memory of the image generation device by the texture array securing unit 103c (step 503d). The secured arrays hold, for each pixel, for example 8-bit color information (R, G, B) and existence probability information, as the texture information corresponding to the positions of the projection points.
Also in step 503d, the correspondence between the two-dimensional digital coordinates (U_j, V_j) of the pixels of the texture array and the three-dimensional coordinates (X_j, Y_j, Z_j) of the projection points T_j is set. At this time, for example, the values of (X_j, Y_j, Z_j) may be set as a table for all values of (U_j, V_j), or the values of (X_j, Y_j, Z_j) may be set only for representative (U_j, V_j), with the remaining correspondences obtained by interpolation processing (for example, linear interpolation).
When the processing of step 503d is finished, the color information and existence possibility information of the pixels secured in step 503d, corresponding to each projection point, are determined based on the subject images acquired in step 502. At this time, the projection point series S are scanned successively within the range S ∈ Σ, and the projection points T_j are scanned successively within the range T_j ∈ S, in a double loop.
The loop first initializes the projection point series S to be operated on to the starting position (step 503e). Then the projection point T_j to be scanned within the projection point series S is initialized to the starting position, for example j = 1 (step 503f).
When the processing of steps 503e and 503f is finished, the coordinates (X_j*, Y_j*, Z_j*) of the projection point T_j are obtained, and, for each camera, the position on its image plane corresponding to the point at (X_j*, Y_j*, Z_j*) when photographed is computed using the relations of formulas 14 to 17 (step 503g). At this time, the set of cameras for which the corresponding points are computed is denoted Ξ ≡ {C_i | i ∈ I}. The camera set Ξ may be all cameras, or one or more cameras arbitrarily selected according to the positions of the virtual viewpoint P, the reference viewpoint R, or the projection point T_j.
The corresponding point of each camera obtained here is denoted G_ij (i ∈ I), and its digital coordinates are denoted (u_ij*, v_ij*) (i ∈ I).
When the processing of step 503g is finished, the color information of the pixel (U_j*, V_j*) on the texture array corresponding to the projection point T_j is determined by the color information determination unit 103e by mixing the color information at (u_ij*, v_ij*) (i ∈ I) (step 503h). The mixing processing takes, for example, the average of the color information of the corresponding points of the cameras.
In the mixing processing, a weighting according to the angle θ_ij formed by the camera C_i, the projection point T_j, and the reference viewpoint R may also be applied. Here, for example as shown in Fig. 14, consider the case where the camera set is Ξ = {C_n, C_{n+1}, C_{n+2}} (I = {n, n+1, n+2}). When the color information (as R, G, B vectors) of the projection point T_j and of the corresponding points G_ij is denoted K_j and K_ij respectively, then, for example, if K_j is determined as in the following formula 19, a camera shooting from an angle closer to the angle at which the projection point T_j is observed from the reference viewpoint R contributes more to the mixing.
[formula 19]
$K_j = \frac{\sum_{i \in I} \cos\theta_{ij} \cdot K_{ij}}{\sum_{i \in I} \cos\theta_{ij}}$
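The following sketch illustrates the weighted mixing of formula 19; the colors and angles are illustrative assumptions.

```python
import numpy as np

def mix_color(corr_colors, thetas):
    """Formula 19: weight each camera's corresponding-point color K_ij by
    cos(theta_ij), where theta_ij is the angle formed by the camera, the
    projection point, and the reference viewpoint."""
    w = np.cos(thetas)
    return (w[:, None] * corr_colors).sum(axis=0) / w.sum()

colors = np.array([[0.9, 0.1, 0.1], [0.8, 0.2, 0.1], [0.5, 0.5, 0.4]])
thetas = np.radians([5.0, 20.0, 60.0])   # camera closest in angle dominates
print(mix_color(colors, thetas))
```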
When the processing of step 503h is finished, the degree of correlation Q_j of the corresponding points G_ij (i ∈ I) of the cameras corresponding to the projection point T_j is computed by the corresponding point matching processing unit 103d (step 503i). When the degree of correlation Q_j is defined, for example, as in the following formula 20, Q_j takes a positive value, and the higher the correlation of the corresponding points, the smaller the value Q_j takes.
[formula 20]
$Q_j = \sum_{i \in I} (K_j - K_{ij})^2$
In formula 20, the color information of the projection point and the corresponding points is compared at a single point only, but the color information may also be compared over a plurality of points in the neighborhoods of the projection point and the corresponding points. In that case, for example as shown in Fig. 15, taking a region Φ_j near the projection point T_j and the corresponding region Ψ_ij of camera C_i, the degree of correlation Q_j over these regions is computed, for example, by the following formula 21.
[formula 21]
$Q_j = \sum_{i \in I} \sum_{\substack{(U_j, V_j) \in \Phi_j \\ (u_{ij}, v_{ij}) \in \Psi_{ij}}} \left\{ K(U_j, V_j) - K(u_{ij}, v_{ij}) \right\}^2$
Here, K(U_j, V_j) denotes the estimated value of the color information at coordinates (U_j, V_j) of the texture array, and K(u_ij, v_ij) denotes the color information at coordinates (u_ij, v_ij) of the image captured by camera C_i.
The method of computing the degree of correlation is not limited to the above, and the present invention does not depend on a particular computation method. For example, in the example shown in Fig. 15, the pixel corresponding to the projection point T_j or corresponding point, together with the region formed by the 8 surrounding pixels, is taken as the neighborhood region Φ_j and the corresponding region Ψ_ij respectively; however, the method of determining the neighborhood region Φ_j and the corresponding region Ψ_ij is not limited to this example.
After the processing of step 503i is finished, the projection point T_j is updated (step 503j), and it is judged whether all projection points T_j ∈ S have been scanned (step 503k). If all have been scanned, the processing proceeds to the next step 503l; if not, it returns to step 503g.
When step 503k judges that all have been scanned, then, based on the degrees of correlation Q_j computed in step 503i, the existence probability information determination unit 103f determines, for all projection points T_j (j ∈ J) on the straight line through the reference viewpoint R, the degree of possibility (existence probability information) β_j that the subject exists at each projection point (step 503l). The existence probability information β_j must satisfy the conditions of the following formulas 22 and 23.
[formula 22]
$0 \le \beta_j \le 1$
[formula 23]
$\sum_{j=1}^{M} \beta_j = 1$
Further, β_j takes a value closer to 1 the higher the probability that the subject exists at the projection point T_j; therefore, to the degrees of correlation Q_j between the projection points and corresponding points computed in step 503i, conversion processing represented, for example, by the following formulas 24 and 25 is applied to obtain the existence probability information β_j (j ∈ J).
[formula 24]
$\tilde{\beta}_j = \frac{1}{Q_j}$
[formula 25]
$\beta_j = \frac{\tilde{\beta}_j}{\sum_{j=1}^{M} \tilde{\beta}_j}$
However, as long as the existence probability information β_j satisfies the conditions of formulas 22 and 23, the conversion processing is not necessarily limited to the method represented by formulas 24 and 25.
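A sketch of the conversion of formulas 24 and 25; the small constant guarding against division by zero is an assumption of this sketch, not part of the text.

```python
import numpy as np

def existence_probability(Q, eps=1e-6):
    """Formulas 24 and 25: invert the correlation measure Q_j (small Q_j =
    high correlation) and normalize along the ray so the beta_j sum to 1.
    eps is an assumed safeguard against Q_j = 0."""
    beta_tilde = 1.0 / (Q + eps)             # formula 24
    return beta_tilde / beta_tilde.sum()     # formula 25

Q = np.array([2.0, 0.05, 1.5])               # layer 2 matches best
print(existence_probability(Q))              # probability peaked at layer 2
```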
When the processing of step 503l is finished, the projection point series S is updated (step 503m), and it is judged whether all projection point series S ∈ Σ have been scanned (step 503n). If all have been scanned, the processing proceeds to the next step 503o; if not, it returns to step 503f.
When step 503n judges that all have been scanned, the rendering unit 103g then draws, according to the existence probability information β_j, the image seen when the projection planes L_j (j=1,2,...,M) of the multilayer structure are observed from the virtual viewpoint P (step 503o). Here, for example as shown in Fig. 16, the coordinates of the image plane of the virtual viewpoint P are denoted (u_p, v_p). At this time, the color information K_p* of a pixel p*(u_p*, v_p*) on the image plane is determined as the sum of the color information {K_j* | j ∈ J} of the projection point series {T_j* | j ∈ J} on the straight line connecting P and p*, each multiplied by the corresponding existence probability information {β_j* | j ∈ J}, expressed as the following formula 26.
[formula 26]
$K_{p^*} = \sum_{j=1}^{M} \beta_j^* K_j^*$
When the color information has been determined for all pixels on the image plane, the image at the virtual viewpoint P is obtained.
Alternatively, when K_p* is computed as in the following formula 27 instead of formula 26, K_p* is guaranteed to be contained within the valid color space even when the reference viewpoint R differs from the position of the virtual viewpoint P.
[formula 27]
$K_{p^*} = \frac{\sum_{j=1}^{M} \beta_j^* K_j^*}{\sum_{j=1}^{M} \beta_j^*}$
The procedure shown here scans the pixels on the image plane and determines their color information, but the method is not limited to this; for example, the data of the projection plane structure, the texture arrays, the setting of the viewpoint P, and so on may be handed to a general-purpose graphics library such as OpenGL or DirectX, entrusting the drawing processing to it.
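A sketch of the rendering of formulas 26 and 27 for one pixel; all values are illustrative.

```python
import numpy as np

def render(betas, colors, normalize=True):
    """Color of one virtual-viewpoint pixel from the projection point series
    on the line P-p*: formula 26, or formula 27 when `normalize` is set,
    which keeps the result in the valid color range even when the reference
    viewpoint differs from the virtual viewpoint. Shapes: (M,), (M, 3)."""
    k = betas @ colors          # formula 26
    if normalize:
        k = k / betas.sum()     # formula 27
    return k

betas = np.array([0.05, 0.9, 0.05])
colors = np.array([[0.1, 0.1, 0.1], [0.7, 0.5, 0.3], [1.0, 1.0, 1.0]])
print(render(betas, colors))
```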
With the above, the generation processing of the virtual viewpoint image (step 503) is finished, and the generated virtual viewpoint image is displayed on the image display unit 4 (step 504). At this time, in the virtual viewpoint image 6 displayed on the image display unit 4, for example as shown in Fig. 17, the part 7A of the subject image 7 for which the correlation of the corresponding points computed in step 503i is low, that is, for which the reliability of the estimate is low, is drawn blurrily and appears indistinct. Therefore, unlike the existing virtual viewpoint image 6 shown for example in Fig. 1, the image does not appear partially missing, and the deterioration is of a degree not noticeable to the eyes of the user.
Then, in step 505, continuation or termination of the processing is judged; if continuing, the processing repeats from the first step 501, and if terminating, termination processing is carried out.
As discussed above, according to the virtual viewpoint image generation method using the virtual viewpoint image generation device of embodiment 1-1, an accurate geometric model of the subject is not obtained in all cases and for all parts as with existing means; rather, on the premise that, depending on the imaging conditions or the part of the subject, distance estimation cannot always yield estimates of sufficient reliability, parts for which only a low-reliability estimate is obtained are drawn blurrily so as to reduce their contribution to the generated image, preventing extreme image deterioration, while parts for which high-reliability distance data are obtained are drawn clearly so as to increase their contribution. The image deterioration of parts with low estimation reliability is therefore made inconspicuous, and a virtual viewpoint image with little apparent deterioration for the user can be formed.
In addition, since the virtual viewpoint image generation device 1 of embodiment 1-1 generates the virtual viewpoint image using texture mapping, the load on the device in the image generation processing can be reduced, and the virtual viewpoint image can be generated at high speed.
The virtual viewpoint image generation device 1 need not be a dedicated device; it can also be realized, for example, by a computer having a CPU and storage devices such as memory and a hard disk, together with a program. In that case, a program capable of executing each step shown in Fig. 11 on a computer is created; when it is executed on the computer, even an ordinary personal computer can easily generate a virtual viewpoint image with little image deterioration at high speed. In this case, the data related to the processing are held in the storage device and read and processed by the CPU as appropriate.
The program can be provided recorded on a recording medium such as a floppy disk or CD-ROM (Compact Disc Read Only Memory), or can be provided via a network.
The structure of the virtual viewpoint image generation device and the generation method and procedure of the virtual viewpoint image described in embodiment 1-1 are examples; the gist of the present invention is to determine the transparency information of the projection planes constituted in multiple layers according to the reliability of the correspondence between regions of the images obtained by photographing the subject from a plurality of different viewpoint positions. Within a scope not departing greatly from this gist, the invention does not depend on a particular processing method or implementation.
Also, the system using the virtual viewpoint image generation device 1 is not limited to the one-way system shown in Fig. 7; a two-way system may also be used.
Fig. 18 is a schematic diagram showing an application example of a system using the virtual viewpoint image generation device 1 of embodiment 1-1.
The virtual viewpoint image generation device 1 of embodiment 1-1 is suited to systems such as videophones and video conferences; for example, as shown in Fig. 18, it can be applied to a system in which users UserA and UserB, present at distant places connected by a communication network, are each regarded as both user and subject and are shown each other's images, supporting face-to-face communication. At this time, when the image of UserB seen from the viewpoint desired by UserA is denoted Img[A→B], Img[A→B] is generated based on the images of UserB captured by the subject imaging units (cameras) 3B on the UserB side and is shown on the image display unit 4A on the UserA side. Similarly, when the image of UserA seen from the viewpoint desired by UserB is denoted Img[B→A], Img[B→A] is generated based on the images of UserA captured by the subject imaging units (cameras) 3A on the UserA side and is shown on the image display unit 4B on the UserB side.
In the system shown in Fig. 18, the viewpoint position input unit of each User is constituted by a position sensor worn on the user's head together with data transmitting units 201A, 201B and data receiving units 202A, 202B, an example in which the desired virtual viewpoint is computed by automatically following the movement of the user's head. However, the viewpoint position input unit need not necessarily take this form; the same function can also be achieved by estimating the head position based on the images of the user captured by the subject imaging units 3A, 3B.
Here, the system may be configured so that Img[A→B] is generated by either the virtual viewpoint image generation device 1A on the UserA side or the virtual viewpoint image generation device 1B on the UserB side. In the former case, the images of UserB captured by the cameras 3B are transferred via the network 8 to the virtual viewpoint image generation device 1A on the UserA side, which generates Img[A→B] based on them and shows it by the image display unit 4A. In the latter case, Img[A→B] is generated by the virtual viewpoint image generation device 1B on the UserB side from the images of UserB captured by the cameras 3B, and the virtual viewpoint image Img[A→B] is then transferred to the virtual viewpoint image generation device 1A on the UserA side and shown by the image display unit 4A. Although the explanation is omitted, the same applies to Img[B→A].
Further, the units constituting the image generation unit 103 in Fig. 6 may be shared between the virtual viewpoint image generation device 1A on the UserA side and the virtual viewpoint image generation device 1B on the UserB side. For example, in order to generate Img[A→B], the projection plane determination unit 103a and the reference viewpoint determination unit 103b may be installed in the image generation device 1A on the UserA side, while the corresponding point matching unit 103d, the texture array securing unit 103c, the color information determination unit 103e, the existence probability information determination unit 103f, and the rendering unit 103g are installed in the image generation device 1B on the UserB side. Although the explanation is omitted, the same applies to Img[B→A].
Furthermore, an image generation device 1C distinct from the virtual viewpoint image generation devices 1A and 1B of the UserA and UserB sides may be provided at any place on the network 8, with all or part of the image generation unit installed there.
Communication between the two users UserA and UserB has been described here, but the number of users is not limited to two; the present invention can likewise be applied among more users. In that case, if a virtual space used for communication, distinct from the real space in which the users actually exist, is imagined, and the images of the other users according to the positional relations in it are shown to one another, the users can have the sensation of sharing a virtual space (cyberspace) on the network.
(embodiment 1-2)
Fig. 19 is a schematic diagram for explaining the virtual viewpoint image generation method of embodiment 1-2; Fig. 19(a) is a flowchart showing the processing characteristic of embodiment 1-2, and Fig. 19(b) is a flowchart showing an example of the concrete procedure of the step of determining the transparency information.
Embodiment 1-2 shows an example in which, in the generation processing of the virtual viewpoint image described in embodiment 1-1, instead of using the existence probability information of the projection points computed in step 503l directly, the existence probability information is converted into transparency information for image generation.
The structure of the virtual viewpoint image generation device 1 and the overall processing procedure can be the same as in the example described in embodiment 1-1, so only the differing parts are described below.
In embodiment 1-1, in step 503 of generating the image, as shown in Fig. 11, the virtual viewpoint image is generated using the existence probability information β_j determined in step 503l; in embodiment 1-2, as shown in Fig. 19(a), a step 503p of converting the existence probability information to determine the transparency is added after step 503l.
Accordingly, whereas in step 503d of embodiment 1-1 arrays holding color information and existence probability information were secured, in step 503d of embodiment 1-2 arrays holding color information and transparency information are secured.
The transparency information α_j is computed based on the existence probability information β_j; as in step 503l of embodiment 1-1, in embodiment 1-2 the existence probability information is computed provisionally in step 503l, and the transparency information is computed in the next step 503p.
In the rendering step 503o of embodiment 1-2, instead of formula 26 or formula 27 described in embodiment 1-1, D_j is computed successively according to formulas 2 to 4. The color information K_p* of a pixel p*(u_p*, v_p*) on the image plane is therefore computed as in the following formula 28.
[formula 28]
$K_{p^*} = D_M = \alpha_M K_M + (1-\alpha_M)\alpha_{M-1} K_{M-1} + \cdots + (1-\alpha_M)(1-\alpha_{M-1})\cdots(1-\alpha_2)\alpha_1 K_1$
The above is the image generation method of the present embodiment; an example of the method of computing the transparency information α_j based on the existence probability information β_j is described below.
First, comparing formula 26 and formula 28 gives the following formula 29.
[formula 29]
$\beta_M = \alpha_M, \qquad \beta_j = \left\{ \prod_{m=j+1}^{M} (1-\alpha_m) \right\} \alpha_j \quad (j \in J)$
From this relation, the procedure for obtaining α_j in the order j = M, M-1, ..., 1 is as follows.
First, as shown in Fig. 19(b), the initial value of j is set to j = M (step 5031p). Then, according to formula 29, α_M = β_M is determined (step 5032p). Then the value of j is updated to j = j - 1 (step 5033p).
Next, it is judged whether α_{j+1} is 1 (step 5034p). If α_{j+1} ≠ 1, then α_j is determined by the following formula 30, from the relation of formula 29 (step 5035p).
[formula 30]
$\alpha_j = \frac{\beta_j}{\prod_{m=j+1}^{M} (1-\alpha_m)}$
On the other hand, when α_{j+1} = 1, α_j is determined by the following formula 31 (step 5036p).
[formula 31]
$\alpha_j = 1$
The basis for this is as follows. First, if α_{j+1} = 1, then, as shown in the following formula 32, the denominator in formula 30 is 0 (zero) and the computation is impossible.
[formula 32]
$\prod_{m=j+1}^{M} (1-\alpha_m) = 0$
Expanding formula 32 gives the following formula 33, and substituting formula 29 gives formula 34.
[formula 33]
$\alpha_M + (1-\alpha_M)\alpha_{M-1} + \cdots + (1-\alpha_M)(1-\alpha_{M-1})\cdots(1-\alpha_{j+2})\alpha_{j+1} = 1$
[formula 34]
$\beta_M + \beta_{M-1} + \cdots + \beta_{j+1} = 1$
From formula 34 together with formulas 22 and 23, the following formula 35 is obtained.
[formula 35]
$\beta_j = 0$
Here, substituting formulas 32 and 35 into formula 29 gives 0 = 0 × α_j, so α_j may take any value. Therefore, in embodiment 1-2 it is set, for example, to α_j = 1.
However, as stated above, α_j may be set to an arbitrary value, and the present invention does not depend on a particular method of determining α_j.
Then it is judged whether the processing has reached j = 1 (step 5037p); if all processing is finished, the procedure ends, and if not, it returns to step 5033p.
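The conversion procedure of steps 5031p to 5037p can be sketched as follows; array index k corresponds to j = k + 1, and the values are illustrative.

```python
import numpy as np

def beta_to_alpha(beta):
    """Steps 5031p-5037p: convert existence probability information beta_j
    (summing to 1 along a ray; index 0 is the farthest layer j = 1) into
    transparencies alpha_j, scanning from j = M down to j = 1."""
    M = len(beta)
    alpha = np.empty(M)
    alpha[M - 1] = beta[M - 1]              # step 5032p: alpha_M = beta_M
    remaining = 1.0 - alpha[M - 1]          # product of (1 - alpha_m), m > j
    for j in range(M - 2, -1, -1):          # step 5033p: decrement j
        if remaining == 0.0:                # denominator of formula 30 is 0,
            alpha[j] = 1.0                  # so formula 31 (beta_j = 0 here)
        else:
            alpha[j] = beta[j] / remaining  # formula 30
        remaining *= 1.0 - alpha[j]
    return alpha

beta = np.array([0.1, 0.8, 0.1])
print(beta_to_alpha(beta))                  # [1.0, 0.888..., 0.1]
```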
As discussed above, by the virtual viewpoint image generation method of embodiment 1-2, as in embodiment 1-1, a virtual viewpoint image in which partial image deterioration is inconspicuous can be generated easily and at high speed.
Further, as described in embodiment 1-1, in image generation using the existence probability information directly, when the reference viewpoint differs from the virtual viewpoint, an increase of luminance sometimes occurs near occluded regions of the subject; in contrast, image generation in which the existence probability information is converted into transparency, as in embodiment 1-2, has the effect of preventing this phenomenon. A virtual viewpoint image with little image deterioration, closer to the actual subject, can therefore be obtained.
Also, as described in embodiment 1-1, in image generation using the existence probability information directly, when the reference viewpoint differs from the virtual viewpoint, the computation of the color information carries no guarantee of staying within the range of valid color information, so that, for example, correction processing is necessary; in contrast, image generation in which the existence probability information is converted into transparency, as in embodiment 1-2, needs no such correction. The image generation processing can therefore be simplified.
In addition, image generation in which the existence probability information is converted into transparency, as in the virtual viewpoint image generation method of embodiment 1-2, can also represent semi-transparent subjects effectively, so the effect of the present invention extends to a wider range of subjects in the real world.
The virtual viewpoint image generation method described in embodiment 1-2 is an example; the gist of the present invention is to convert the existence probability information into transparency information and generate the virtual viewpoint image. Within a scope not departing greatly from this gist, it does not depend on a particular computation method or processing procedure.
The color information mentioned above corresponds to luminance information in the case of a black-and-white image and can be handled in the same way.
(embodiment 1-3)
Fig. 20 is a schematic diagram for explaining the virtual viewpoint image generation method of embodiment 1-3, showing an example of the projection plane groups, reference viewpoints, virtual viewpoint, and projection points.
Embodiment 1-3 describes a method of performing image generation by setting a projection plane group specific to each camera C_i, instead of using projection planes L_j common to the plural cameras. The structure of the virtual viewpoint image generation device 1 and the overall image generation procedure are the same as described in embodiment 1-1, so detailed explanation is omitted.
First, as explained in embodiment 1-1, the virtual viewpoint is determined in step 501, and the images of the subject are obtained in the next step 502.
In the image generation method of embodiment 1-3, in step 503 of determining the projection planes, performed as part of the generation of the virtual viewpoint image, a projection plane group specific to each camera is set.
At this time, for example as shown in Figure 20, a camera-specific projection plane group Λ_i ≡ {L_ij | j ∈ J} is set so as to be parallel to the image surface Img_i (i ∈ I) of camera C_i (i ∈ I, I = {n-1, n, n+1, n+2}).
After the setting of the projection plane groups is finished, in the processing of determining the reference viewpoint of step 503b, the reference viewpoint R_i specific to the projection plane group Λ_i is set at the same position as the viewpoint C_i of the camera.
After step 503b is finished, the processing of step 503c is performed by the procedure explained in embodiment 1-1. Then, in the next step 503d, each pixel of the digital image shot by the camera is back-projected onto the projection planes and associated with the pixels of the texture arrays of the projection planes.
Here, the transformation from a point (u, v) of the digital image to a point (x, y) on the image surface is given, for example, by formula 17 above; the back projection from (x, y) to a point (X, Y, Z) on a projection plane in three-dimensional space can be formulated as follows.
In general, when a two-dimensional point (x, y) is given, the points (X, Y, Z) in three-dimensional space satisfying formula 14 and formula 15 above exist in unlimited number; the back-projection image is the point (X, Y, Z) among them that lies on the projection plane.
The equation of a projection plane is generally expressed as aX + bY + cZ + d = 0; rewritten in vector form, it becomes formula 36 below.
[formula 36]
$$\begin{pmatrix} a & b & c & d \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = 0$$
Here, combining formula 14, formula 15 and formula 36 above yields formula 37 below.
[formula 37]
$$s \begin{pmatrix} x \\ y \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} \phi_{11} & \phi_{12} & \phi_{13} & \phi_{14} \\ \phi_{21} & \phi_{22} & \phi_{23} & \phi_{24} \\ \phi_{31} & \phi_{32} & \phi_{33} & \phi_{34} \\ a & b & c & d \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$
Therefore, solving formula 37 above for (X, Y, Z) gives the back-projection image from (x, y) to (X, Y, Z). For example, if the 4-row, 4-column matrix of formula 37 has an inverse, then with s' = 1/s the back-projection image is obtained by formula 38 below.
[formula 38]
$$s' \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = \begin{pmatrix} \phi_{11} & \phi_{12} & \phi_{13} & \phi_{14} \\ \phi_{21} & \phi_{22} & \phi_{23} & \phi_{24} \\ \phi_{31} & \phi_{32} & \phi_{33} & \phi_{34} \\ a & b & c & d \end{pmatrix}^{-1} \begin{pmatrix} x \\ y \\ 1 \\ 0 \end{pmatrix}$$
The above is only one example; calibration that corrects lens aberration (for example, distortion aberration) may also be performed, and the point (X, Y, Z) on the projection plane corresponding to a point (u, v) of the digital image may be kept as a table.
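The back projection of formulas 36 to 38 can be sketched in a few lines of code. The following is a minimal illustration, not taken from the patent itself: it assumes a known 3-row, 4-column projection matrix Φ (formula 17) and a plane (a, b, c, d), stacks them into the 4 × 4 matrix of formula 37, and solves for (X, Y, Z); the function name and numeric values are hypothetical.

```python
# Minimal sketch of the back projection in formulas 36-38: recover the point
# (X, Y, Z) on the plane aX + bY + cZ + d = 0 that projects to image point (x, y).
import numpy as np

def back_project(phi: np.ndarray, plane: np.ndarray, x: float, y: float) -> np.ndarray:
    """phi: 3x4 projection matrix, plane: (a, b, c, d); returns (X, Y, Z)."""
    # Stack the projection rows and the plane equation into the 4x4 matrix
    # of formula 37: s * (x, y, 1, 0)^T = M * (X, Y, Z, 1)^T.
    m = np.vstack([phi, plane])              # 4x4
    rhs = np.array([x, y, 1.0, 0.0])
    sol = np.linalg.solve(m, rhs)            # formula 38 with s' = 1/s
    return sol[:3] / sol[3]                  # homogeneous -> Euclidean

# Hypothetical example: an axis-aligned camera and the plane Z = -5.
phi = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0]])
plane = np.array([0.0, 0.0, 1.0, 5.0])       # Z + 5 = 0
print(back_project(phi, plane, 0.2, -0.1))   # -> [-1.   0.5 -5. ]
```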
Next, the processing from step 503e to step 503g is performed by the procedure explained in embodiment 1-1. Then, in the processing of determining the color information of the next step 503h, the projection points on the projection plane group Λ_i are determined using only the color information of the image shot by camera C_i.
By performing step 503d and step 503h as explained in embodiment 1-3, the digital image shot by the camera can be used directly as the color information of the texture arrays of the projection planes.
Then, from step 503i to step 503n, the processing is performed by the same procedure as embodiment 1-1. Next, in the rendering unit of step 503o, the mixing processing of color information is performed for all projection points seen as overlapping from the virtual viewpoint P. In the example shown in Figure 20, the mixing processing of color information is performed for the projection points of the projection plane groups Λ_n and Λ_{n+1} that lie on a straight line through the virtual viewpoint P.
Here, letting T_ij be a projection point on projection plane L_ij, K_ij its color information, and β_ij its existence possibility information, the color information of the image surface of the virtual viewpoint P shown by formula 27 in embodiment 1-1 is determined, for example, as follows.
That is, the color information K_p* of a certain pixel p*(u_p*, v_p*) on the image surface is determined as the weighted mean of the color information {K_ij* | i ∈ I, j ∈ J} of the projection point sequence {T_ij* | i ∈ I, j ∈ J} on the straight line connecting P and p*, with the corresponding existence possibility information {β_ij* | i ∈ I, j ∈ J} as the coefficients, as in formula 39 below.
[formula 39]
$$K_{p^*} = \frac{\sum_{i \in I} \sum_{j=1}^{M} \beta_{ij}^* K_{ij}^*}{\sum_{i \in I} \sum_{j=1}^{M} \beta_{ij}^*}$$
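As an illustration of the weighted mean of formula 39, the following sketch (our own, with hypothetical values) blends the colors of the projection points seen as overlapping from the virtual viewpoint P according to their existence possibility information.

```python
# Minimal sketch of formula 39: the pixel colour is the existence-probability-
# weighted average of the overlapping projection-point colours K_ij.
import numpy as np

def blend_pixel(colors: np.ndarray, betas: np.ndarray) -> np.ndarray:
    """colors: (num_points, 3) RGB of the projection points seen as
    overlapping from the virtual viewpoint P; betas: (num_points,)."""
    w = betas / betas.sum()      # normalise the existence possibilities
    return w @ colors            # weighted average over all i, j

colors = np.array([[200.0, 50.0, 50.0],    # subpoint on plane group L_n
                   [60.0, 60.0, 220.0]])   # subpoint on plane group L_{n+1}
betas = np.array([0.8, 0.2])
print(blend_pixel(colors, betas))          # -> [172.  52.  84.]
```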
As described above, according to the virtual viewpoint image generation method of embodiment 1-3, as with embodiment 1-1, a virtual viewpoint image in which partial image deterioration is inconspicuous can be generated easily and at high speed.
In addition, if a projection plane group specific to each camera is set independently of the positional relation between the cameras, as in embodiment 1-3, then even if the arrangement of the cameras is complicated or irregular, the setting of the projection plane groups is unaffected, and image generation can be performed by a consistent processing method.
The virtual viewpoint image generation method described in embodiment 1-3 is only an example; the gist of the present invention is to transform the existence probability information into transparency information and generate a virtual viewpoint image. Within the scope of this gist, the invention does not depend on any specific calculation method or processing procedure.
The above color information corresponds to luminance information in the case of a black-and-white image, which can be handled in the same way.
(effect of first embodiment)
According to the method described in the first embodiment, an accurate geometric model of the subject is not obtained at all points in all cases, as in existing methods. Instead, on the premise that, depending on the shooting conditions or the part of the subject, distance estimation cannot always yield estimates of sufficient reliability, parts for which only low-reliability estimates are obtained are rendered vaguely so as to reduce their contribution to the generated image, preventing extreme image deterioration, while parts for which high-reliability distance data are obtained are rendered clearly so as to increase their contribution. As a result, the deterioration of the image in parts of low estimation reliability is inconspicuous.
In addition, the problem that brightness increases near occluded regions of the subject when the reference viewpoint differs from the virtual viewpoint can be solved. Also, when the reference viewpoint differs from the virtual viewpoint, the calculated color information cannot be guaranteed to fall within the valid color information range, so correction processing is sometimes needed; in the method described in this embodiment, no such correction is needed. Translucent subjects can also be rendered effectively, so the effect of the present invention can be applied to the wider range of subjects that exist in the real world. Furthermore, if a projection plane group specific to each camera is set independently of the positional relation between the cameras, then even if the arrangement of the cameras is complicated or irregular, the setting of the projection plane groups is unaffected, and image generation can be performed by a consistent processing method.
Moreover, when a projection plane group specific to each camera is set, no mixing processing between the images shot by the corresponding cameras is needed for the color information of the projection planes. Therefore, when the processing is performed by a computer, it can be performed in parallel, and the image generation can be made faster.
In addition, the color information of the projection plane group associated with the same camera is all identical, so when the processing is performed by a computer, the texture memory storing the color information can be shared. Therefore, memory consumption does not grow excessively with the number of projection planes, and the load on the apparatus used for image generation can be reduced. Also, since the camera corresponding to a certain projection plane is determined uniquely, calibrations such as correction of the distortion aberration of the lens can be performed easily and at high speed by establishing the correspondence between the coordinates of both in advance.
Furthermore, the processing time of an apparatus that generates virtual viewpoint images from a plurality of subject images can be shortened, or the load placed on the apparatus can be reduced, so that even a popular personal computer can generate an image with little partial deterioration in a short time.
[second embodiment]
Next, the second embodiment of the present invention is described. The second embodiment mainly corresponds to claims 12 to 21. The basic structure of the second embodiment is the same as that of the first embodiment, but the second embodiment is characterized in that a plurality of shooting units are prepared and the existence probability is calculated based on the degree of correlation obtained for each shooting unit. In the figures used to explain the second embodiment, parts with the same function are marked with the same reference numerals.
The image generation method of the second embodiment obtains the three-dimensional shape of the object reflected in a plurality of images with different viewpoints, and generates an image presenting a three-dimensional picture of the object, or an image of the object observed from an arbitrary viewpoint. In this method, texture mapping is used: projection planes of multi-layer structure are set, and the distance from the observer's viewpoint to each point on the object surface is estimated to obtain the three-dimensional shape of the object.
When estimating the distance to the object surface, for example, for the points on the projection planes that are seen as overlapping from the observer's viewpoint (hereinafter called projection points), the degree of correlation between the points on each image corresponding to a projection point (hereinafter called corresponding points) is obtained. Then, from the height of the degree of correlation of the projection points seen as overlapping from the observer's viewpoint, it is estimated near which of the overlapping projection points the object surface exists. In the image generation method of this embodiment, however, it is not decided near which single projection point the object surface exists; rather, the object surface is considered to exist near each projection point with a ratio corresponding to the magnitude of its degree of correlation. In the image generation method of the present invention, the probability that the object surface exists at or near each projection point (hereinafter called the existence probability) is determined from the degree of correlation of each projection point.
Then, when an image is generated based on the three-dimensional shape of the subject, the color information of the projection points is assigned to the color information of each point on the generated image at a ratio corresponding to the height of the existence probability. In this way, for an observer observing the projection planes, parts where the estimation of the distance to the object surface is difficult are rendered vaguely, making discontinuous noise and the like inconspicuous.
In addition, for the projection points seen as overlapping from the observer's viewpoint or from the reference viewpoint used to obtain the three-dimensional shape of the object, if a probability density distribution of the existence probability of the object surface can be assumed to some degree, the existence probability can also be obtained using a parametric function p(l) reflecting that probability density distribution. In this case, the deviation of the degree of correlation caused by noise on the shot images can be reduced, and a drop in the reliability of the existence probability can be prevented.
Furthermore, if the degree of correlation of a certain projection point is obtained not from the corresponding points on all images but from the corresponding points on the images shot from several predetermined viewpoints, then, for example, by excluding images in which the object surface is not reflected even though it exists at the projection point, due to occlusion, the reliability of the degree of correlation is improved, and the reliability of the existence probability is improved as well.
Figure 21 to Figure 28 are schematic diagrams for explaining the principle of the image display method of this embodiment. Figure 21 illustrates the concept of the generation method of the displayed image, Figure 22 shows Figure 21 two-dimensionally, Figure 23(a) and Figure 23(b) illustrate the method of obtaining the degree of correlation of corresponding points, Figure 24(a) and Figure 24(b) illustrate problem points when obtaining the degree of correlation between corresponding points, Figure 25 illustrates a method of solving the problems when obtaining the degree of correlation, Figure 26(a) and Figure 26(b) illustrate an example of a method of improving the precision of the existence probability, and Figure 27(a), Figure 27(b) and Figure 28 illustrate the features of this embodiment.
In the image display method of this embodiment, when generating the image to be displayed, a virtual three-dimensional space is first set in an image generation apparatus such as a computer, and the viewpoints C_i (i = 1, 2, ..., N) of the cameras shooting the images of the object and the projection planes L_j (j = 1, 2, ..., M) of multi-layer structure for estimating the three-dimensional shape of the object are set in that space. If the cameras are arranged on a straight line, the viewpoints C_i are set on the X axis (Z = 0), for example as shown in Figure 21 and Figure 22. The projection planes L_j are set as planes parallel to the XY plane at Z = l_j (< 0), for example as shown in Figure 21 and Figure 22.
As shown in Figure 21 and Figure 22, if the object surface exists at the intersection point (projection point) T_m of a straight line lp drawn from the observer's viewpoint P with a certain projection plane L_m, this point should be reflected as the point (corresponding point) G_{i,m} on the image taken by the camera set at viewpoint C_i. Similarly, if the object surface exists at projection point T_m, it should be reflected as the corresponding points G_{i+1,m} and G_{i+2,m} on the images taken by the cameras set at points C_{i+1} and C_{i+2}, respectively. Therefore, if the degree of correlation (similarity) of the corresponding points G_{i,m}, G_{i+1,m}, G_{i+2,m} on each image corresponding to projection point T_m is known, it can be estimated whether the object surface exists at or near T_m. Then, if such an estimation is performed for each projection point T_j on the straight line lp drawn from the observer's viewpoint P, it can be estimated near which projection point T_j (projection plane L_j) on that line the object surface is located.
When estimating near which projection point T_j (projection plane L_j) on the straight line lp drawn from the observer's viewpoint P the object surface is located, the degree of correlation Q_j of the corresponding points G_{i,j} of projection point T_j is used. The degree of correlation Q_j is obtained, for example, using formula 40 below, as in the first embodiment.
[formula 40]
$$Q_j = \sum_{i \in I} (K_j - K_{ij})^2$$
Here, I is the set of camera viewpoints C_i (i = 1, 2, ..., N) for which the corresponding point G_{i,j} of projection point T_j can be defined on the image surface, K_ij is the color information of corresponding point G_{i,j}, and K_j is the color information of projection point T_j, taken as the mean value of the color information K_ij of the corresponding points G_{i,j}.
Now consider the case where the surface shape of the actual object coincides with a set projection plane L_j, for example as in Figure 23(a). In this case, among the projection points T_j on the straight line lp drawn from the observer's viewpoint P, the projection point nearest to the object surface as seen from the camera viewpoints C_i, C_{i+1}, C_{i+2} is projection point T_m. Therefore, as shown in Figure 23(a), the points on the object surface reflected at the corresponding points G_{i,m}, G_{i+1,m}, G_{i+2,m} of projection point T_m are in a very close positional relation. On the other hand, the points on the object surface reflected at the corresponding points of, for example, projection point T_2 are in positions apart from each other. Consequently, when the degree of correlation Q_j of each projection point T_j on the straight line lp is obtained using formula 40 above, only the degree of correlation Q_m of projection point T_m becomes a very small value, as shown in Figure 23(b). Therefore, when observing the direction of the straight line lp from the observer's viewpoint P, the object surface can be estimated to be located at projection point T_m, that is, at the distance l_m at which projection plane L_m is set.
Then, by drawing straight lines lp in all directions from the observer's viewpoint P and repeating the same processing for the projection points T_j on each straight line lp, the shape of the object surface reflected in the images can be estimated.
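The following sketch illustrates the correlation degree of formula 40, assuming the colors of the corresponding points have already been sampled from the camera images; the function name and sample values are hypothetical.

```python
# Minimal sketch of formula 40: Q_j is the sum of squared differences between
# the mean colour K_j and each corresponding-point colour K_ij, so a small
# Q_j means high correlation (an object surface is likely near T_j).
import numpy as np

def correlation_degree(corresponding_colors: np.ndarray) -> float:
    """corresponding_colors: (num_cameras, 3) colours K_ij of the
    corresponding points G_ij for one projection point T_j."""
    k_j = corresponding_colors.mean(axis=0)            # colour K_j of T_j
    return float(((corresponding_colors - k_j) ** 2).sum())

# Hypothetical samples: three cameras see almost the same surface colour,
# so Q_j is small for this projection point.
print(correlation_degree(np.array([[100.0, 90.0, 80.0],
                                   [102.0, 91.0, 78.0],
                                   [ 99.0, 88.0, 81.0]])))
```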
However, the cases in which such an estimation method is effective, that is, in which the reliability of the estimated object surface shape is high, as in Figure 23(a) and Figure 23(b), are only those in which the actual object surface shape is fairly simple. When the object shape is complicated, or when a plurality of objects are seen as overlapping from the observer's viewpoint P, the reliability of the estimated object surface shape drops.
As an example in which the reliability of the estimated object surface shape drops, consider the case of Figure 24(a), where two objects are seen as overlapping from the observer's viewpoint P and the surface shapes of the two objects coincide with set projection planes L_j. In this case, when the degree of correlation Q_j is obtained for the projection points T_j on the straight line lp drawn from the observer's viewpoint P, shown by a dotted line in Figure 24(a), a distribution such as that of Figure 23(b) is formed. Near this straight line lp, the reliability of the estimated surface shape of object A is therefore considered high.
However, when the degree of correlation Q'_m is obtained for the projection point T'_m on the straight line lp' drawn from the observer's viewpoint P, shown by a solid line in Figure 24(a), the corresponding point G'_{i,m} of the image taken from viewpoint C_i reflects the surface of object B, while the corresponding points G'_{i+1,m} and G'_{i+2,m} of the images taken from viewpoints C_{i+1} and C_{i+2} reflect the surface of object A. In such a case, the degree of correlation Q'_m obtained by formula 40 becomes large. Consequently, the distribution of the degrees of correlation Q'_j of the projection points T'_j on the straight line lp' becomes as shown in Figure 24(b), and it is difficult to estimate near which projection point T'_j the object surface exists. If, as with the distribution shown in Figure 23(b), the object surface is then estimated to exist near the projection point with the smallest degree of correlation Q'_j, a misjudgment produces discontinuous noise on the displayed image.
Therefore, in the image display method of this embodiment, no estimation is made that the object surface exists at or near the projection point T_j with the smallest degree of correlation Q_j; instead, the object surface is considered to exist at each projection point T_j with a probability corresponding to the magnitude of its degree of correlation Q_j. Letting β_j be the probability that the object surface exists at or near projection point T_j (the existence probability), the existence probabilities β_j of the projection points on a straight line lp drawn from the observer's viewpoint P, that is, of the projection points T_j seen as overlapping from that viewpoint, must satisfy the conditions of formula 41 and formula 42 below.
[formula 41]
$$0 \le \beta_j \le 1$$
[formula 42]
$$\sum_{j=1}^{M} \beta_j = 1$$
Furthermore, among the projection points T_j, the higher the probability that the object surface exists at a projection point, the closer its existence probability β_j should be to 1. Therefore, the existence probability β_j of each projection point T_j is determined from its degree of correlation Q_j by, for example, the conversion processing expressed by formula 43 and formula 44 below.
[formula 43]
$$\tilde{\beta}_j = \frac{1}{Q_j}$$
[formula 44]
$$\beta_j = \frac{\tilde{\beta}_j}{\sum_{j=1}^{M} \tilde{\beta}_j}$$
The existence probability β_j need only satisfy the conditions of formula 41 and formula 42 above; it may therefore also be determined by conversion processing other than that expressed by formula 43 and formula 44.
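A minimal sketch of the conversion of formulas 43 and 44 follows, turning the correlation degrees along one line of sight into existence probabilities satisfying formulas 41 and 42; the epsilon guarding against division by zero is our own addition, not the patent's.

```python
# Minimal sketch of formulas 43-44: correlation degrees Q_j along one line of
# sight become existence probabilities beta_j in [0, 1] that sum to 1.
import numpy as np

def existence_probabilities(q: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """q: (M,) correlation degrees Q_j of the overlapping projection points."""
    beta_tilde = 1.0 / (q + eps)           # formula 43: smaller Q_j -> larger weight
    return beta_tilde / beta_tilde.sum()   # formula 44: normalise to sum to 1

print(existence_probabilities(np.array([40.0, 5.0, 60.0])))
# -> roughly [0.10, 0.83, 0.07]: the surface is most likely near the 2nd point
```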
Once the probability β_j that the object surface exists at or near each projection point T_j has been determined by such processing, the color information K_j and the existence probability β_j of each projection point T_j on a certain straight line lp drawn from the observer's viewpoint P are determined, for example as shown in Figure 25.
Using the three-dimensional shape of the object estimated in this way, for example when a three-dimensional image of the object is displayed on a DFD of the luminance modulation type, the pixel corresponding to each projection point T_j on each of the plurality of display surfaces is displayed with the color information K_j at a brightness corresponding to the existence probability β_j. If, for the degrees of correlation Q_j of the projection points T_j on the straight line lp, only the degree of correlation Q_m of a certain projection point T_m differs markedly from the other values, as in Figure 23(b), then only the existence probability β_m of that projection point T_m takes a large value. In that case, only the brightness of the pixel corresponding to projection point T_m increases, and it appears very clear to an observer observing the projection planes L_j from the observer's viewpoint P.
On the other hand, when the degrees of correlation Q_j of the projection points T_j on the straight line lp are as in Figure 24(b) and it is difficult to estimate near which projection point T_j the object surface exists, a plurality of projection points have existence probabilities of a similar degree. Then, the pixels corresponding to the projection points T_j on a plurality of projection planes L_j are displayed with brightness of a similar degree, and the sense of distance appears blurred to an observer observing the projection planes L_j from the observer's viewpoint P. However, since pictures of the object surface are displayed at a plurality of projection points seen as overlapping from the observer's viewpoint, this does not become the discontinuous noise produced by a misjudgment of the distance to the object surface. Therefore, even if an accurate three-dimensional shape of the displayed object is not obtained, a three-dimensional picture of the object that looks natural to the observer can still be displayed.
In addition, when the three-dimensional shape of the object estimated by the above process is used to display a two-dimensional image of the object observed from an arbitrary viewpoint (an arbitrary viewpoint image), the color information obtained by mixing the color information K_j of the projection points T_j on the straight line lp drawn from the observer's viewpoint P at ratios corresponding to the existence probabilities β_j need only be used as the color information of the intersection point between the straight line lp and the image surface of the displayed image.
In determining the existence probability β_j, if a probability density distribution of the probability that the object surface exists can be assumed to some degree, the existence probabilities β_j of the projection points T_j determined by formula 43 and formula 44 can be subjected to statistical processing based on the shape distribution of the object, reducing the estimation error caused by noise in the images shot at the viewpoints C_i. Here, to distinguish the existence probability before the statistical processing from that after it, the existence probability obtained by formula 43 and formula 44 before the statistical processing is called the evaluation reference value v_j; the value obtained by performing the statistical processing on the evaluation reference value v_j is then taken as the existence probability β_j.
In the statistical processing performed on the evaluation reference values v_j obtained by formula 43 and formula 44, first, for example as shown in Figure 26(a), the probability density distribution of the existence probability of the object is applied to the distribution of the evaluation reference values v_j, and the distribution function p(l) of the existence probability is obtained. If the probability density distribution follows, for example, a normal distribution (Gaussian distribution), the distribution function p(l) of the existence probability can be expressed as formula 45 below.
[formula 45]
$$p(l) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{ -\frac{(l - \mu)^2}{2\sigma^2} \right\}$$
Here, μ is the mean value and σ is a parameter expressing the variance; they can be estimated using the evaluation reference values v_j as in formula 46 and formula 47 below.
[formula 46]
$$\mu = \sum_{j=1}^{M} v_j l_j$$
[formula 47]
$$\sigma^2 = \sum_{j=1}^{M} v_j (l_j - \mu)^2$$
After the distribution function p(l) of the existence probability is obtained in this way, the existence probability β_j is determined using, for example, formula 48 below.
[formula 48]
$$\beta_j = \int_{l_j^-}^{l_j^+} p(l)\, dl$$
Here, as shown in Figure 26(b), l_j^- and l_j^+ are the lower and upper limits of the distance range within which the object surface is regarded as existing on the projection plane L_j at distance l_j; they are given, for example, by formula 49 and formula 50 below.
[formula 49]
$$l_j^- = \frac{l_{j-1} + l_j}{2}, \qquad l_1^- = -\infty$$
[formula 50]
$$l_j^+ = \frac{l_j + l_{j+1}}{2}, \qquad l_M^+ = \infty$$
By displaying the three-dimensional image of the object on the DFD, or displaying a two-dimensional image seen from an arbitrary viewpoint, based on the existence probabilities β_j obtained using the relations of formula 45 to formula 50, an image in which the noise of the original images, that is, of the images of the object taken from the viewpoints C_i, is reduced can be displayed.
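The statistical treatment of formulas 45 to 50 can be sketched as follows: the evaluation reference values v_j are fitted with a normal distribution and the refined existence probability β_j of each projection plane is the integral of p(l) over the half-way interval around its distance l_j. The example values are hypothetical, and the plane distances are assumed sorted in ascending order.

```python
# Minimal sketch of formulas 45-50: fit a Gaussian to the evaluation reference
# values v_j, then integrate it between the mid-points of adjacent planes.
import math

def refined_probabilities(v, l):
    """v: evaluation reference values v_j (summing to 1);
    l: plane distances l_j, sorted ascending."""
    mu = sum(vj * lj for vj, lj in zip(v, l))                  # formula 46
    var = sum(vj * (lj - mu) ** 2 for vj, lj in zip(v, l))     # formula 47
    sigma = math.sqrt(var) or 1e-12
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    betas = []
    for j in range(len(l)):
        lo = -math.inf if j == 0 else 0.5 * (l[j - 1] + l[j])          # formula 49
        hi = math.inf if j == len(l) - 1 else 0.5 * (l[j] + l[j + 1])  # formula 50
        betas.append(cdf(hi) - cdf(lo))                                # formula 48
    return betas

print(refined_probabilities([0.2, 0.7, 0.1], [-3.0, -2.0, -1.0]))
```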
In addition, when, due to occlusion by an object as shown in Figure 24(a) and Figure 24(b), the corresponding point G_{i,j} of a certain projection point T_j reflects an object different from that of the other corresponding points, a more reliable estimation is considered possible if the degree of correlation Q_j is obtained after excluding such corresponding points. Considering the situation of Figure 24(a), in the method explained so far, when the degree of correlation Q'_m of projection point T'_m on the straight line lp' drawn from the observer's viewpoint P is obtained, the corresponding point G_{i,m} that reflects the surface of object B rather than the surface of object A is also used. Therefore, the degree of correlation Q'_m obtained by formula 40 becomes large, and it is difficult to estimate the distance at which the object surface exists on the straight line lp'.
Therefore, for example as shown in Figure 27(a), the degree of correlation Q_m of projection point T_m is obtained after excluding the corresponding point G_{i,m} that reflects the surface of object B. In the example shown in Figure 27(a), the degree of correlation Q_m is obtained using the corresponding points G_{i+1,m} and G_{i+2,m} of projection point T_m. Since points very close to the surface of object A are reflected at the corresponding points G_{i+1,m} and G_{i+2,m}, when the degree of correlation Q'_j is obtained by formula 40, the distribution becomes one in which the degree of correlation Q'_m of projection point T'_m is smaller than those of the other projection points, as shown in Figure 27(b). Therefore, the influence of occlusion can be reduced, and a three-dimensional shape close to the actual object surface shape can be estimated.
At this time, several combinations Ξ_h of the corresponding points G_{i,j} (viewpoints C_i) used to obtain the degree of correlation Q_j are set, the degree of correlation Q_j of each projection point T_j on the straight line lp drawn from the observer's viewpoint P is obtained for every combination Ξ_h, and the results are integrated to obtain the final existence probability. In general, when a combination of corresponding points G_{i,j} (viewpoints C_i) is written Ξ_h (h ∈ H), the distribution function of the existence probability on the straight line lp obtained using each combination Ξ_h is written p_h(l), and the existence probability obtained from each distribution function p_h(l) is written β_{j,h}, the integrated existence probability β_j can be obtained by formula 51 below.
[formula 51]
$$\beta_j = \frac{\sum_{h \in H} \beta_{j,h}}{\sum_{h \in H} 1}$$
In addition, the color information K_j of projection point T_j can be determined from the color information K_{j,h} and existence probability β_{j,h} obtained for each combination Ξ_h, for example using formula 52 below.
[formula 52]
$$K_j = \frac{\sum_{h \in H} \beta_{j,h} K_{j,h}}{\sum_{h \in H} \beta_{j,h}}$$
In this way, for example as shown in Figure 28, a distribution function p_h(l) obtained with a certain combination Ξ_h for which the estimation reliability of the distance to the object surface is high, and which shows a clear peak, is not easily affected by the distribution functions p_h(l) of other combinations Ξ_h for which the estimation reliability of the distance to the object surface is low. Therefore, as a whole, the estimation reliability of the distance from the observer's viewpoint to each point on the displayed object improves.
(embodiment 2-1)
Figure 29 to Figure 34 are schematic diagrams for explaining the image generation method of embodiment 2-1 of the present invention. Figure 29 is a flowchart showing an example of the overall processing procedure, Figure 30 is a flowchart showing an example of the processing procedure of the step in Figure 29 that determines the color information and existence probability of the projection points, Figure 31 is a flowchart showing an example of the step in Figure 30 that determines the existence probability, Figure 32 shows a setting example of the shooting units, and Figure 33, Figure 34(a) and Figure 34(b) illustrate the method of converting the information on the projection planes into information on the display surfaces.
The image generation method of embodiment 2-1 obtains the three-dimensional shape of the object reflected in images taken from a plurality of viewpoints, and, based on the obtained three-dimensional shape of the object, generates the two-dimensional images to be displayed on the respective picture display surfaces of an image display apparatus having a plurality of picture display surfaces, such as a DFD.
For example as shown in Figure 29, the image generation method has: step 1 of obtaining the images of the object taken from viewpoints C_i; step 2 of setting the observer's viewpoint P; step 3 of obtaining the three-dimensional shape of the object; step 4 of converting the color information and existence probability of the points (projection points) on the projection planes expressing the obtained three-dimensional shape into the color information and existence probability of the points (display points) on the picture display surfaces, thereby generating the two-dimensional images to be displayed on the picture display surfaces; and step 5 of displaying the display points on the picture display surfaces with brightness or transparency corresponding to the color information and existence probability.
In addition, for example as shown in Figure 29, step 3 has: step 301 of setting the projection planes L_j of multi-layer structure; step 302 of determining the reference viewpoint for obtaining the three-dimensional shape of the object; step 303 of setting the projection point strings, each consisting of a group of projection points T_j on the projection planes L_j seen as overlapping from the reference viewpoint, and the corresponding points G_{i,j} on each obtained image corresponding to each projection point T_j of the projection point string; step 304 of determining the combinations Ξ_h of viewpoints C_i (hereinafter called shooting units) used to obtain the degree of correlation Q_j of each projection point T_j; step 305 of securing the array that stores the color information and existence probability of the projection points T_j; and step 306 of determining the color information and existence probability of the projection points T_j.
In addition, for example as shown in Figure 30, step 306 has: step 30601 of initializing the projection point string; step 30602 of initializing the shooting unit Ξ_h and the voting data; step 30603 of initializing the projection point T_j on the projection point string; step 30604 of determining the color information of projection point T_j; step 30605 of calculating the degree of correlation Q_{j,h} using, among the corresponding points G_{i,j} of projection point T_j, those included in the shooting unit Ξ_h; step 30606 of repeating the processing of step 30604 and step 30605 for all projection points T_j on the projection point string being processed; step 30607 of voting the degrees of correlation Q_{j,h} obtained for the shooting unit Ξ_h; step 30608 of updating the shooting unit Ξ_h and repeating the processing of step 30604 to step 30607 for all shooting units; step 30609 of determining the existence probability of each projection point T_j based on the degrees of correlation Q_{j,h} voted in step 30607; and step 30610 of updating the projection point string and repeating the processing of step 30602 to step 30609 for all projection point strings.
Furthermore, as shown in Figure 31, step 30609 has: step 30609a of initializing the shooting unit Ξ_h; step 30609b of calculating the evaluation reference values v_{j,h} from the degrees of correlation Q_{j,h} obtained using the shooting unit Ξ_h; step 30609c of determining the distribution function p_h(l) of the existence probability by statistical processing of the evaluation reference values v_{j,h}; step 30609d of determining the existence probability β_{j,h} of each projection point T_j from the distribution function of the existence probability; step 30609e of updating the shooting unit Ξ_h and repeating the processing of step 30609b to step 30609d; and step 30609f of integrating the existence probabilities β_{j,h} obtained for the respective shooting units Ξ_h, thereby determining the existence probability β_j of each projection point T_j.
When generating the two-dimensional images to be displayed on the respective picture display surfaces of the DFD using the image generation method of embodiment 2-1, first the images obtained by shooting the object with cameras set at a plurality of different viewpoints C_i (i = 1, 2, ..., N) are obtained (step 1). The viewpoints C_i are the positions at which the cameras shooting the images are set, and here they are assumed to be arranged one-dimensionally on a straight line, for example as shown in Figure 21. However, the viewpoints C_i are not limited to a straight line; they may be arranged one-dimensionally on a plurality of straight lines or on a curve, or arranged two-dimensionally in a lattice on a plane or a curved surface rather than one-dimensionally. The obtained images may be color images or black-and-white images, but in embodiment 2-1 the description assumes color images in which each point (pixel) is expressed by color information using the three primary colors red (R), green (G) and blue (B).
Next, the observer's viewpoint P from which the three-dimensional picture (image) of the object displayed on the DFD is observed is set in a virtual space on an image generation apparatus such as a computer (step 2).
Next, the three-dimensional shape of the object used for generating the images is obtained (step 3). In step 3, first the projection planes L_j for estimating the three-dimensional shape (surface shape) of the object are set in the virtual space (step 301). At this time, the projection planes L_j are set as planes parallel to the XY plane, for example as shown in Figure 21. The setting interval of the projection planes L_j may coincide with the interval of the picture display surfaces of the DFD displaying the images, or may not.
Next, the reference viewpoint for obtaining the three-dimensional shape of the object is determined (step 302). The reference viewpoint may be, for example, the observer's viewpoint, or may be defined as an arbitrary point in the three-dimensional space other than the observer's viewpoint.
Next, the projection point strings, each consisting of a group of projection points T_j on the projection planes L_j seen as overlapping from the observer's viewpoint P or the reference viewpoint, and the corresponding points G_{i,j} on the obtained images corresponding to each projection point T_j, are set (step 303).
At this time, a projection point T_j is expressed by a point (X_j, Y_j, Z_j) in the virtual space (three-dimensional space), and if a two-dimensional xy coordinate system is considered on the image surface of the image taken from viewpoint C_i, the coordinates of the corresponding point G_{i,j} are expressed as (x_{i,j}, y_{i,j}). The coordinates (x_{i,j}, y_{i,j}) of corresponding point G_{i,j} are obtained by projecting the projection point (X_j, Y_j, Z_j) onto the image surface of the image taken from viewpoint C_i. For this projection, the general 3-row, 4-column transformation matrix explained in the first embodiment may be used.
In addition, when an image generation apparatus such as a computer is used, the images handled are so-called digital images, expressed as two-dimensional arrays in the memory of the apparatus. In the following, the coordinate system expressing the position in the array is called the digital image coordinate system, and the position is written (u, v). Here, for example, considering a digital image of 640 pixels horizontally and 480 pixels vertically, the position of each pixel on the digital image is expressed by a variable u taking one of the integer values from 0 to 639 and a variable v taking one of the integer values from 0 to 479, and the color information of a point is given by data obtained by quantizing the red (R), green (G) and blue (B) information at that address into, for example, 8 bits each.
At this time, the coordinates (x_{i,j}, y_{i,j}) of the corresponding points G_{i,j} in the three-dimensional virtual space and the digital image coordinates (u, v) correspond one-to-one and have, for example, the relation of formula 53 below.
[formula 53]
$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} k_u & -k_u \cot\theta & u_0 \\ 0 & k_v / \sin\theta & v_0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$$
Formula 53 above assumes, for example, that the u axis of the digital image coordinate system is parallel to the x axis. In formula 53, k_u and k_v are the unit lengths of the u axis and v axis of the digital image coordinate system with respect to the (x, y) coordinates of the virtual space, and θ is the angle formed by the u axis and the v axis.
Therefore, in step 303, the coordinates (X_j, Y_j, Z_j) of the projection points T_j are associated with the digital image coordinates (u_{ij}, v_{ij}). This association may, for example, be given as a table of the values of (X_j, Y_j, Z_j) for all (u_{ij}, v_{ij}), or the values of (X_j, Y_j, Z_j) may be set only for representative (u_{ij}, v_{ij}) and the remaining points obtained by interpolation processing such as linear interpolation.
In addition, in the digital image coordinate system, (u, v) take quantized values, but in the following description they are assumed to take continuous values unless otherwise noted, and suitable discretization processing is performed when accessing the two-dimensional array.
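The correspondence of formula 53 can be sketched as follows; the parameter values (unit lengths, axis angle, image-centre offsets u_0, v_0) are illustrative assumptions only.

```python
# Minimal sketch of formula 53: map image-surface coordinates (x, y) in the
# virtual space to digital-image coordinates (u, v).
import numpy as np

def to_digital(x: float, y: float,
               k_u: float, k_v: float, theta: float,
               u0: float, v0: float) -> tuple[float, float]:
    a = np.array([[k_u, -k_u / np.tan(theta), u0],
                  [0.0,  k_v / np.sin(theta), v0],
                  [0.0,  0.0,                 1.0]])
    u, v, _ = a @ np.array([x, y, 1.0])
    return u, v

# Square pixels (theta = 90 degrees), image centre at (320, 240):
print(to_digital(0.1, -0.05, k_u=1000.0, k_v=1000.0,
                 theta=np.pi / 2, u0=320.0, v0=240.0))   # -> (~420.0, 190.0)
```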
When the processing of step 303 is finished, the combinations (shooting units) Ξ_h of camera viewpoints C_i used when obtaining the degree of correlation Q_j are determined (step 304). Here, for example as shown in Figure 32, if the cameras are set at the viewpoints C_i in a 3 × 3 lattice, the shooting units are defined, for example, as the four combinations Ξ_1 = {C_1, C_2, C_3, C_5}, Ξ_2 = {C_3, C_5, C_6, C_9}, Ξ_3 = {C_5, C_7, C_8, C_9}, Ξ_4 = {C_1, C_4, C_5, C_7}.
The method of determining the shooting units Ξ_h is arbitrary; in the example shown in Figure 32 they are not limited to Ξ_1, Ξ_2, Ξ_3, Ξ_4, and other shooting units may also be prepared. The shooting units Ξ_h may be prepared in advance according to the arrangement of the cameras (viewpoints C_i), or may be specified by the observer.
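As a concrete illustration, the four shooting units of Figure 32 for a 3 × 3 camera lattice can be written down directly; the data structure chosen here is our own.

```python
# The four shooting units Xi_1..Xi_4 of Figure 32 for cameras C_1..C_9
# arranged in a 3x3 lattice, stored as sets of camera identifiers.
SHOOTING_UNITS = {
    1: {"C1", "C2", "C3", "C5"},   # Xi_1
    2: {"C3", "C5", "C6", "C9"},   # Xi_2
    3: {"C5", "C7", "C8", "C9"},   # Xi_3
    4: {"C1", "C4", "C5", "C7"},   # Xi_4
}
```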
When the processing of step 304 is finished, an array for storing the color information K_j and object existence probability β_j of the projection points T_j is secured, for example in the memory (storage unit) of the image generation apparatus (step 305). At this time, an array is secured such that each piece of information K_j, β_j of a projection point T_j has, for example, 8 bits for each of the red (R), green (G) and blue (B) color information and for the object existence probability.
When the processing of step 305 is finished, the color information and object existence probability of each projection point T_j are determined using the plurality of obtained images (step 306). In step 306, the processing of obtaining the color information K_{j,h} and degree of correlation Q_{j,h} of each projection point T_j on a certain projection point string is performed with a specified shooting unit Ξ_h, and this processing is repeated for all shooting units Ξ_h. This processing is then repeated for all projection point strings.
Therefore, in step 306, first the projection point string is initialized, as shown in Figure 30 (step 30601).
Next, the shooting unit Ξ_h and the voting data of the degree of correlation are initialized (step 30602).
Next, the projection point T_j on the projection point string being processed is initialized, for example to j = 1 (step 30603).
Next, the color information K_{j,h} of projection point T_j is determined from the color information of the corresponding points G_{i,j} included in the selected shooting unit Ξ_h (step 30604). At this time, the color information K_{j,h} of projection point T_j is taken, for example, as the mean value of the color information K_{i,j} of the corresponding points G_{i,j} included in the shooting unit Ξ_h.
Next, the degree of correlation Q_{j,h} between projection point T_j and the corresponding points G_{i,j} included in the selected shooting unit Ξ_h is calculated (step 30605). At this time, the degree of correlation Q_{j,h} is calculated, for example, using formula 40 above.
Next, the projection point T_j is updated, and it is judged whether the processing of step 30604 and step 30605 has been performed for all projection points on the projection point string being processed (step 30606). If there are projection points for which the processing of step 30604 and step 30605 has not been performed, the procedure returns to step 30604 and the processing is repeated.
After the processing of step 30604 and step 30605 has been performed for all projection points on the projection point string being processed, the results, that is, the color information K_{j,h} and degrees of correlation Q_{j,h} obtained from the corresponding points G_{i,j} included in the selected shooting unit Ξ_h, are voted (step 30607).
When the processing of step 30607 is finished, the shooting unit Ξ_h is updated, and it is judged whether there are shooting units for which the processing of step 30604 to step 30607 has not been performed for the projection point string being processed (step 30608). If there are shooting units for which the processing of step 30604 to step 30607 has not been performed, the procedure returns to step 30603 and the processing is repeated.
After the processing of step 30604 to step 30607 has been performed for all shooting units Ξ_h for the projection point string being processed, the color information K_j and existence probability β_j of each projection point T_j are determined from the color information K_{j,h} and degrees of correlation Q_{j,h} voted in step 30607 (step 30609).
In step 30609, for example as shown in Figure 31, the shooting unit Ξ_h is first initialized (step 30609a).
Next, the evaluation reference values v_{j,h} are calculated from the degrees of correlation Q_{j,h} of the projection points T_j calculated using the shooting unit Ξ_h (step 30609b). The evaluation reference values v_{j,h} are obtained, for example, by the conversion processing expressed by formula 43 and formula 44 above.
Next, statistical processing of the evaluation reference values v_{j,h} is performed, and the distribution function p_h(l) of the existence probability for the case of using the shooting unit Ξ_h is obtained (step 30609c). The distribution function p_h(l) is obtained, for example, using formula 45, formula 46 and formula 47 above.
Next, from the distribution function p_h(l) of the existence probability for the case of using the shooting unit Ξ_h, the probability (existence probability) β_{j,h} that the object surface exists at each projection point T_j when using the shooting unit Ξ_h is determined (step 30609d). The existence probability β_{j,h} is obtained, for example, using formula 48, formula 49 and formula 50 above.
Next, the shooting unit Ξ_h is updated, and it is judged whether there are shooting units Ξ_h for which the processing of step 30609b to step 30609d has not been performed for the projection point string being processed (step 30609e). If there are shooting units Ξ_h for which the processing of step 30609b to step 30609d has not been performed, the procedure returns to step 30609b and the processing is repeated.
After the processing of step 30609b to step 30609d has been performed for the projection point string being processed, the results are integrated to determine the color information K_j and existence probability β_j of each projection point T_j (step 30609f). At this time, the color information K_j is obtained, for example, using formula 52 above, and the existence probability β_j is obtained, for example, using formula 51 above.
When the processing of step 30609f is finished, the processing of step 30609 is finished. Next, the projection point string is updated, and it is judged whether there are projection point strings for which the processing of step 30602 to step 30609 has not been performed (step 30610). If there are projection point strings for which the processing of step 30602 to step 30609 has not been performed, the procedure returns to step 30602 and the processing is repeated.
After the processing of step 30602 to step 30609 has been performed for all projection point strings, the processing of step 306 (step 3) is finished, and the three-dimensional shape of the object is obtained.
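The nested loops of step 306 can be summarized in a self-contained toy sketch (our own structuring, following Figures 30 and 31); it omits the statistical refinement of step 30609c for brevity, and the sampled colors are fabricated stand-ins for real corresponding points.

```python
# Toy sketch of step 306 for a single projection-point string: every shooting
# unit votes a colour K_{j,h} and correlation Q_{j,h} per projection point,
# the votes become existence probabilities, and the units are integrated.
import numpy as np

def step306_for_string(samples_per_unit):
    """samples_per_unit: list over shooting units Xi_h; each entry is an
    array of shape (M, cameras_in_unit, 3) holding the corresponding-point
    colours K_ij for the M projection points of the string."""
    betas, colors = [], []
    for samples in samples_per_unit:                     # steps 30603-30608
        k_jh = samples.mean(axis=1)                      # step 30604
        q_jh = ((samples - k_jh[:, None, :]) ** 2).sum(axis=(1, 2))  # 30605
        b = 1.0 / (q_jh + 1e-12)
        betas.append(b / b.sum())                        # steps 30609b-30609d
        colors.append(k_jh)
    betas = np.array(betas)                              # (H, M)
    colors = np.array(colors)                            # (H, M, 3)
    beta_j = betas.mean(axis=0)                          # formula 51
    k_j = (betas[..., None] * colors).sum(axis=0) / betas.sum(axis=0)[:, None]
    return k_j, beta_j                                   # formula 52

rng = np.random.default_rng(0)
units = [rng.uniform(0, 255, size=(5, 4, 3)) for _ in range(2)]
k_j, beta_j = step306_for_string(units)
print(beta_j)   # existence probability of each of the 5 projection points
```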
After the processing of step 3 is finished and the three-dimensional shape of the object is obtained, the two-dimensional images to be displayed on the respective picture display surfaces of the DFD are generated based on the three-dimensional shape of the object. When generating the two-dimensional images, image generation faces LD_r (r = 1, 2, ..., R) for generating the two-dimensional images are set in the virtual space in which the shape of the object has been obtained, for example as shown in Figure 33.
Here, first consider the case in which the number and intervals of the projection planes L_j coincide with the number and intervals of the image generation faces LD_r, for example as shown in Figure 33. In this case, the color information KD_r and existence probability γ_r of a display point A_r of an image generation face LD_r need only be the color information K_j and existence probability β_j of the projection point T_j coinciding with the display point A_r.
In addition, the setting intervals of the projection planes L_j need not coincide with the setting intervals of the image generation faces LD_r, and the number of set projection planes L_j need not coincide with the number of set image generation faces LD_r. That is, depending on the setting method of the projection planes L_j, there are cases, as shown for example in Figure 34(a), in which the setting intervals of the projection planes L_j do not coincide with those of the image generation faces LD_r. In such cases, the color information KD_r and existence probability γ_r of the intersection points (display points) A_r of the straight line lp drawn from the observer's viewpoint P with the image generation faces LD_r are obtained by the following process.
First, the color information KD_r of each display point A_r is determined, for example, as the mean value of the color information K of the projection points T, among the projection points T_j on the straight line lp, for which the display point A_r (image generation face LD_r) is the nearest display point (image generation face). The color information KD_r of display point A_r may also be set not to the mean value but to the color information K of the projection point T nearest to the display point A_r.
On the other hand, the existence probability γ_r of each display point A_r is set to the sum of the existence probabilities β of the projection points T for which this display point A_r (image generation face LD_r) is the nearest display point (image generation face). At this time, if the set of projection planes L_j for which a certain image generation face LD_r is the nearest image generation face is written {L_j | j ∈ J}, the existence probability γ_r of display point A_r on image generation face LD_r can be given by formula 54 below, using the existence probabilities β_j of the projection points T_j on the projection planes L_j.
[formula 54]
$$\gamma_r = \sum_{j \in J} \beta_j$$
Here, considering the situation shown in Figure 34(a), the projection planes for which image generation face LD_1 is the nearest image generation face are L_1, L_2, L_3. Therefore, the color information KD_1 of display point A_1 is taken, for example, as the mean value of the color information K_1, K_2, K_3 of projection points T_1, T_2, T_3, and the existence probability γ_1 of display point A_1 is taken as the sum of the existence probabilities β_1, β_2, β_3 of projection points T_1, T_2, T_3. Similarly, the color information KD_2 of display point A_2 on image generation face LD_2 is taken, for example, as the mean value of the color information K_4, K_5 of projection points T_4, T_5, and the existence probability γ_2 of display point A_2 is taken as the sum of the existence probabilities β_4, β_5 of projection points T_4, T_5.
In addition, for example as shown in Figure 34(b), when the setting intervals of the image generation faces LD_r differ from those of the projection planes L_j and two projection planes L_1, L_2 are set between two consecutive image generation faces LD_1, LD_2, the existence probabilities γ_1, γ_2 of the display points A_1, A_2 of the image generation faces LD_1, LD_2 can be obtained by distributing the existence probabilities β_j of the projection points T_j of the projection planes L_j according to the ratios of the distances from the projection points T_j to the image generation faces LD_1, LD_2. In general, when the set of a plurality of projection planes L_j set between the image generation faces LD_1, LD_2 is written {L_j | j ∈ J}, the existence probability γ_r of display point A_r on image generation face LD_r can be given by formula 55 below, using the existence probabilities β_j of the projection points T_j.
[formula 55]
$$\gamma_r = \sum_{j \in J} w_{j,r} \beta_j$$
Here, w_{j,r} is a coefficient expressing the degree of contribution of projection plane L_j to image generation face LD_r.
Here, for example as shown in Figure 34(b), consider the case in which projection planes L_1, L_2 are set between two image generation faces LD_1, LD_2. If the distances from projection plane L_1 to the image generation faces LD_1, LD_2 are B_1 and B_2 respectively, the degrees of contribution w_{1,1}, w_{1,2} of projection plane L_1 to the image generation faces LD_1, LD_2 are given by formula 56 below.
[formula 56]
$$w_{1,1} = \frac{B_2}{B_1 + B_2}, \qquad w_{1,2} = \frac{B_1}{B_1 + B_2}$$
Similarly, if the distances from projection plane L_2 to the image generation faces LD_1, LD_2 are B_3 and B_4 respectively, the degrees of contribution w_{2,1}, w_{2,2} of projection plane L_2 to the image generation faces LD_1, LD_2 are given by formula 57 below.
[formula 57]
$$w_{2,1} = \frac{B_4}{B_3 + B_4}, \qquad w_{2,2} = \frac{B_3}{B_3 + B_4}$$
Consequently, the existence probability γ_1 of display point A_1 on image generation face LD_1 and the existence probability γ_2 of display point A_2 on image generation face LD_2 are as shown in formula 58 below.
[formula 58]
$$\gamma_1 = w_{1,1} \beta_1 + w_{2,1} \beta_2, \qquad \gamma_2 = w_{1,2} \beta_1 + w_{2,2} \beta_2$$
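The distribution of formulas 55 to 58 can be sketched as follows, with plane and face depths as illustrative assumptions.

```python
# Minimal sketch of formulas 55-58: the existence probability beta_j of each
# projection plane lying between two image generation faces is distributed to
# the faces in inverse proportion to its distance from them.
def distribute(betas, plane_z, face_z):
    """betas/plane_z: beta_j and depth of each projection plane lying
    between the two faces; face_z: (z1, z2) depths of the faces."""
    z1, z2 = face_z
    gamma1 = gamma2 = 0.0
    for beta, z in zip(betas, plane_z):
        b1, b2 = abs(z - z1), abs(z - z2)   # distances B to each face
        gamma1 += beta * b2 / (b1 + b2)     # w_{j,1}, formulas 56-57
        gamma2 += beta * b1 / (b1 + b2)     # w_{j,2}
    return gamma1, gamma2                   # formula 58

# Two planes between faces at z = -1 and z = -2 (as in Figure 34(b)):
print(distribute([0.6, 0.2], plane_z=[-1.25, -1.75], face_z=(-1.0, -2.0)))
# -> (0.5, 0.3)
```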
Can carry out the processing of described step 4 by process as above, obtain the two dimensional image that on each picture display face of described DFD, shows.Then, according to point (the pixel) (step 5) on each picture display face of the colouring information A demonstration DFD that described each image is generated the each point distribution on the face LD.At this moment, if described DFD is the intensification modulation type, then as long as according to the described probability γ that exists rCorresponding brightness shows that described each image generates face LD rEach display dot A rColouring information KD rGet final product.And, be under the situation of infiltration type at described DFD, for example as long as with each display dot A rPermeability be set at and the described probability γ that exists rCorresponding transparency shows and gets final product.
By carrying out the processing of steps 1 to 5 shown in Figure 29 and displaying a three-dimensional image of the object on the DFD in this way, a three-dimensional image that looks natural to the observer can be displayed even without obtaining an accurate three-dimensional shape of the object, as explained in the description of the principle.
Figures 35 to 37 are schematic diagrams showing the general configuration of a device and a system using the image generating method of embodiment 2-1. Figure 35 is a block diagram showing a configuration example of the image generation device, Figure 36 shows a configuration example of an image display system using the image generation device, and Figure 37 shows another configuration example of an image display system using the image generation device.
In Figure 35, 6 is the image generation device, 601 is the subject image acquisition unit, 602 is the observer viewpoint setting unit, 603 is the projection plane setting unit, 604 is the projection plane information storage area securing unit, 605 is the colour information/existence probability determination unit, 606 is the projection plane information-display plane information conversion unit, 607 is the image output unit, 7 is the image display unit (DFD), 8 is the subject image photographing unit, and 9 is the observer viewpoint input unit.
As shown in Figure 35, the image generation device 6 using the image generating method of embodiment 2-1 comprises: a subject image acquisition unit 601 that acquires a plurality of subject images taken under different conditions; an observer viewpoint setting unit 602 that sets the viewpoint of the observer of the generated images; a projection plane setting unit 603 that sets the projection planes, projection point sequences, corresponding points, camera sets (shooting units), and the like used to determine the existence probabilities; a projection plane information storage area securing unit 604 that secures an area for storing the colour information and existence probabilities of the points (projection points) on the projection planes; a colour information/existence probability determination unit 605 that determines the colour information of the projection points and the probability (existence probability) that the object exists at each projection point; a projection plane information-display plane information conversion unit 606 that converts the colour information and existence probabilities of the projection points into colour information and existence probabilities of the display planes; and an image output unit 607. The images output from the image output unit 607 are displayed on an image display unit 7 having a plurality of overlapping display planes, such as a DFD.
The subject image acquisition unit 601 acquires images of the subject (object) taken by the subject image photographing unit (cameras) 8. The images may be acquired directly from the subject image photographing unit 8, or indirectly from a magnetic, electric, or optical recording medium on which images taken by the unit 8 have been recorded.
The observer viewpoint setting unit 602 sets the relative positional relationship between the observer's viewpoint P and the image generation planes LD_r, such as the distance from the viewpoint to the image display unit 7 and the line of sight, based on information the observer enters with an image condition input unit 9 such as a mouse or keyboard. The image condition input unit 9 may also be a unit that detects the posture or gaze of the observer and inputs information corresponding to that posture or gaze.
As shown for example in Figure 22, the projection plane setting unit 603 sets parallel projection planes L_j (j = 1, 2, ..., M) at distances l_j from the viewpoint (camera) C_i. The projection plane setting unit 603 also sets the projection point sequences, each consisting of the group of projection points T_j on the projection planes L_j that appear overlapped when seen from the observer viewpoint P set by the observer viewpoint setting unit 602, and the corresponding points G_{i,j} on each image corresponding to the projection points T_j. At this time the projection plane setting unit 603 may also set the camera sets, based on conditions entered through the condition input unit 9.
The projection plane information storage area securing unit 604 secures, for example in memory provided in the device, an area for storing the colour information K_j and existence probability β_j of each projection point T_j on each projection plane.
Based on the principle described above, the colour information/existence probability determination unit 605 determines the colour information K_j of each projection point T_j from the colour information of the corresponding points G_{i,j} on the images corresponding to that projection point, and determines the probability β_j that the object surface exists at the projection point T_j.
As explained in embodiment 2-1, the projection plane information-display plane information conversion unit 606 converts the colour information and existence probabilities on the projection planes into the colour information and luminance distribution ratios of the points (display points) on the image generation planes, i.e. the planes of the images shown on the display surfaces of the image display unit 7.
The image generation device 6 performs the processing from step 1 to step 5 described in embodiment 2-1, thereby generating the images displayed on the DFD. That is, the image generation device 6 need not perform the conventional processing of obtaining an accurate three-dimensional shape of the object, so even a device without high processing power can generate the images displayed on the DFD quickly and easily.
The image generation device 6 can also be realized by, for example, a computer and a program executed by that computer. In that case, a program describing instructions corresponding to the processing procedure explained in embodiment 2-1 is executed on the computer. The program can be recorded on a magnetic, electric, or optical recording medium and provided, or provided over a network such as the Internet.
An image display system using the image generation device 6 may have, for example, the configuration shown in Figure 36. The subject image photographing unit 8 may be placed near the space where the observer User observes the image display unit (DFD) 7, or at a geographically remote location. When the photographing unit 8 is placed at a remote location, the captured images can be transmitted to the image generation device 6 over a network such as the Internet.
As shown in Figure 36, an image display system using the image generation device 6 can be applied not only to the case in which an observer User observes a subject Obj, but also to two-way communication systems such as videophones and video conferencing. In that case, as shown for example in Figure 37, image generation devices 6A, 6B, image display units (DFD) 7A, 7B, subject image photographing units 8A, 8B, and reference viewpoint setting units 9A, 9B are installed in the respective spaces of the observers UserA and UserB. Then, for example, when the image generation devices 6A and 6B installed in the observers' spaces are connected by a network 10 such as the Internet, observer UserA can observe on the image display unit 7A a three-dimensional image of observer UserB generated from the images taken by the subject image photographing unit 8B. Similarly, observer UserB can observe on the image display unit 7B a three-dimensional image of observer UserA generated from the images taken by the subject image photographing unit 8A.
When the method is applied to such a two-way communication system, the image generation devices 6A and 6B need not both have the configuration shown in Figure 35; one of them may be an ordinary communication terminal that does not itself have the structural units shown in Figure 35. The structural units shown in Figure 35 may also be distributed between the image generation devices 6A and 6B.
Furthermore, as shown in Figure 37, if another image generation device 6C is provided on the network 10, a three-dimensional image of the object can be obtained on the image display units (DFD) 7A and 7B by using the image generation device 6C on the network 10, even if no image generation device 6A, 6B is installed in the spaces where the observers UserA and UserB are located.
In the image generation system shown in Figure 37 there are two users, UserA and UserB, but the system can also be applied to a communication system among a larger number of observers (users).
In Figures 36 and 37 the subject image photographing unit 8 is shown as a camera unit consisting of four cameras, but there may be two or three cameras, or five or more. The cameras may be arranged one-dimensionally on a straight line or curve, or in a two-dimensional lattice on a plane or curved surface.
As described above, according to the image generating method of embodiment 2-1, a three-dimensional image that looks comparatively natural to the observer can be displayed even without obtaining an accurate three-dimensional shape of the displayed object.
In the image generating method of embodiment 2-1, the camera sets are set in advance before the processing of step 4 is performed, but the method is not limited to this. For example, camera sets meeting conditions specified by the observer may be set dynamically by program processing while the displayed image is being generated. In that case, the observer may, for example, enter conditions such as a distribution or a threshold of the correlation degree Q_j from the image condition input unit; by searching for camera sets that meet those conditions while performing the processing of step 306, a three-dimensional image close to the image the observer desires can be displayed.
The image generating method of embodiment 2-1 has been explained taking as an example the case of acquiring colour images, in which each point (pixel) is represented by three-primary colour information of red (R), green (G), and blue (B), and forming the three-dimensional shape of the object from them. The method, however, is not limited to colour images: black-and-white images, in which each point (pixel) is represented by luminance (Y) and colour difference (U, V), may also be acquired and the three-dimensional shape of the object obtained from them. When the acquired images are black-and-white, the luminance information (Y) is used as the information corresponding to the colour information, the three-dimensional shape is obtained by the procedure explained in embodiment 2-1, and the two-dimensional images are generated.
(embodiment 2-2)
Figures 38 to 42 are schematic diagrams for explaining the arbitrary viewpoint image generating method of embodiment 2-2. Figure 38 is a flowchart showing an example of the overall processing procedure, Figure 39 explains the principle of rendering, Figure 40 explains a problem that arises when an arbitrary viewpoint image is generated, Figures 41(a) and (b) explain a method of solving that problem, and Figure 42 is a flowchart showing an example of the processing procedure for converting existence probabilities into transparencies.
Embodiment 2-1 gave an example of a method that uses the three-dimensional shape of the subject obtained in step 3 to generate the two-dimensional images shown on the image display planes of a device with a plurality of image display planes, such as the DFD. The use of the three-dimensional shape model of the subject is not limited to this, however; it can also be used to generate a two-dimensional image of the subject seen from an arbitrary viewpoint. The difference from embodiment 2-1 is that, as shown in Figure 38, a rendering step 11 is performed after step 3, i.e. a step that turns the three-dimensional shape of the subject into a two-dimensional image seen from the observer's viewpoint. The processing from step 1 to step 3, which obtains the three-dimensional shape of the subject, is the same as explained in embodiment 2-1, so its detailed explanation is omitted.
In the arbitrary viewpoint image generating method of embodiment 2-2, the colour information of each point (pixel) on the displayed arbitrary viewpoint image is determined in the rendering step 11 by mixing the colour information K_j of the projection points T_j (j = 1, 2, ..., M) that appear overlapped with the point A on the arbitrary viewpoint image when seen from the observer's viewpoint P, as shown for example in Figure 39. In this mixing, the colour information K_j of each projection point T_j is weighted by the value of its existence probability β_j, and the colour information K_A of the point A on the generated image is calculated, for example, by formula 59 below.
[formula 59]

$$K_A = \sum_{j=1}^{M} \beta_j K_j$$
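A minimal sketch of the mixing of formula 59 for one ray of projection points (assuming the β_j along the ray sum to 1; the array layout is an assumption for illustration):

```python
import numpy as np

def mix_colors(colors, beta):
    # colors: (M, 3) colour information K_j of the projection points on one ray
    # beta:   (M,) existence probabilities beta_j
    return (np.asarray(beta, float)[:, None] * np.asarray(colors, float)).sum(axis=0)
```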
When the mixing of formula 59 is used, however, the colour information of a point on the generated image may differ greatly from the colour information of the actual object surface, or may fall outside the valid colour space, depending on the shape of the subject and the positional relationship between the reference viewpoint R and the virtual viewpoint P. Consider the case in which two projection planes L_1, L_2, a reference viewpoint R, and a virtual viewpoint P overlap the actual object with the positional relationship shown in Figure 40. If the existence probabilities β_1, β_2 of the projection points T_1, T_2 that appear overlapped from the reference viewpoint R are determined by the method explained in embodiment 2-1, β_1 is roughly 0 and β_2 is roughly 1. Similarly, if the existence probabilities β'_1, β'_2 of the projection points T'_1, T'_2 that appear overlapped from the reference viewpoint R are determined by the method explained in embodiment 2-1, β'_1 is roughly 1 and β'_2 is roughly 0.
In this case, applying formula 59, the colour information K_A of the point A on the image plane of the virtual viewpoint P is obtained by weighting the colour information K'_1, K_2 of the projection points T'_1, T_2, which appear overlapped with the point A on the image plane when seen from the virtual viewpoint P, by the existence probabilities β'_1, β_2 and summing. Since β'_1 and β_2 are both roughly 1, the colour information of the point A becomes K_A = K'_1 + K_2.
However, when the object Obj is observed from the virtual viewpoint P, the projection point T'_1 is occluded by the projection point T_2, so the correct colour information of the point A on the image plane is K_A = K_2. That is, the colour information K_A of the point A on the generated image is brighter in each (R, G, B) component by the extra colour information K'_1.
Moreover, if each component of the colour information K'_1, K_2 of the projection points T'_1, T_2 has a large luminance value, the colour information K_A of the point A exceeds the range of the valid colour space, and a clipping process is needed to keep it within the valid colour information range.
To solve this problem, each projection point is given a transparency having a number of levels from fully transparent to fully opaque, set on the basis of the existence probability of that projection point. The mixing that yields the colour information of each point of the generated image then processes the projection points in order, from the projection point farthest from the viewpoint of the generated image toward the nearest one; the colour information obtained in the mixing up to a given projection point is obtained by interior division, at a ratio corresponding to the transparency, between the colour information of that projection point and the colour information obtained in the mixing up to the preceding projection point. The colour information obtained by this mixing is thus an interior division of the colour information at one stage and the colour information of the next stage.
To explain the principle of this colour information mixing, consider, as shown in Figure 41(a), the case in which projection planes L_j (j = 1, 2, ..., M) and projection points T_j are set in a colour space V, and the vector K_j having red (R), green (G), and blue (B) components expresses the colour information of a projection point. The colour space V is assumed to be expressed by formula 60 below.
[formula 60]

$$K_j \in V,\qquad V \equiv \{(R, G, B) \mid 0 \le R \le 1,\ 0 \le G \le 1,\ 0 \le B \le 1\}$$
The transparency α_j of the projection point T_j is assumed to satisfy the condition of formula 61 below.
[formula 61]

$$0 \le \alpha_j \le 1$$
The colour information D_m obtained in the mixing up to the variable j = m is then expressed by the recurrence relations of formulas 62 and 63 below. The colour information D_M obtained when the mixing reaches the projection plane L_M nearest to the virtual viewpoint P, i.e. when j = M, becomes the colour information K_A of the point A on the image plane of the generated image.
[formula 62]

$$D_m = \alpha_m K_m + (1 - \alpha_m) D_{m-1}$$
[formula 63]

$$D_1 = \alpha_1 K_1$$
From the relation between formulas 61 and 62, the colour information D_m is, as shown in Figure 41(b), an interior division point in the colour space V between the vector K_m and the colour information D_{m-1}. Therefore, if K_m ∈ V and D_{m-1} ∈ V, then D_m ∈ V.
Thus, as explained in the first embodiment, when the conditions of formulas 60 and 61 are satisfied, D_M ∈ V is guaranteed for the colour information D_M for the virtual viewpoint P.
That is, if the colour information K_j and transparencies α_j of the projection points T_j are set so as to satisfy formulas 60 and 61, the colour information D_M of the point A of the generated image is always contained in the colour space V.
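A sketch of the recurrence of formulas 62 and 63, i.e. far-to-near compositing along one ray (the array layout is an assumption):

```python
import numpy as np

def composite(colors, alpha):
    # colors: (M, 3) colour information K_j, ordered far -> near from viewpoint P
    # alpha:  (M,) transparencies alpha_j with 0 <= alpha_j <= 1
    d = alpha[0] * np.asarray(colors[0], float)       # formula 63: D_1 = alpha_1 K_1
    for k, a in zip(colors[1:], alpha[1:]):
        d = a * np.asarray(k, float) + (1.0 - a) * d  # formula 62
    return d  # D_M = K_A; an interior division at every step, so it stays in V
```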
In this way, when the colour information mixing that uses the transparencies α_j is employed and images of the same subject seen from a plurality of virtual viewpoints are generated, even though the colour information and transparencies of the projection points are computed with respect to one particular viewpoint (the reference viewpoint), every image to be generated can be produced within a suitable colour information range as long as the colour information and transparencies satisfy the conditions of formulas 60 and 61.
Therefore, in the arbitrary viewpoint image generating method of embodiment 2-2, the existence probabilities β_j are converted into transparencies α_j, for example after step 30609 shown in Figure 30 or while the rendering step 11 is performed.
As shown for example in Figure 42, the processing that converts the existence probabilities β_j into transparencies α_j first initializes the projection point T_j to j = M (step 1101). Then the transparency α_M of the projection point T_M is set to α_M = β_M (step 1102).
Next, the value of the variable j is updated to j = j - 1 (step 1103), and whether the transparency α_{j+1} is 1 is determined (step 1104). If α_{j+1} ≠ 1, the transparency α_j is obtained, for example, from formula 64 below (step 1105).
[formula 64]

$$\alpha_j = \frac{\beta_j}{\prod_{m=j+1}^{M} (1 - \alpha_m)}$$
If the transparency α_{j+1} is 1, then, for example, α_j = 1 is set (step 1106). The way of obtaining the transparency α_j in step 1105 is not limited to formula 64; other formulas may also be used. Moreover, although the detailed explanation is omitted, α_j may in fact be set to an arbitrary value in step 1106, so values other than 1 are also possible.
Next, whether the processing from step 1104 to step 1106 has been performed down to the variable j = 1 is determined (step 1107). If the processing is not yet finished, the procedure returns to step 1103 and the processing is repeated.
When the processing of steps 1104 to 1106 has been performed down to the variable j = 1, the conversion into transparencies α_j of the existence probabilities β_j of the projection points T_j that appear overlapped with the point A on the image plane when seen from the observer's viewpoint P is complete. The colour information D_M of the point A on the arbitrary viewpoint image is then obtained using the mixing of formulas 62 and 63. When this processing has been performed for every point (pixel) on the arbitrary viewpoint image, the arbitrary viewpoint image for the observer's viewpoint P is obtained.
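The conversion of steps 1101 to 1107 can be sketched as follows. The running product replaces the explicit α_{j+1} = 1 test of step 1104 (the two are equivalent, since the product becomes 0 exactly when a later transparency is 1), and the degenerate branch sets α_j = 1 as in step 1106; the β_j are assumed to sum to at most 1 along the ray.

```python
def probabilities_to_transparencies(beta):
    # beta: existence probabilities beta_1..beta_M, ordered far -> near
    m = len(beta)
    alpha = [0.0] * m
    alpha[m - 1] = beta[m - 1]           # step 1102: alpha_M = beta_M
    remaining = 1.0 - alpha[m - 1]       # product of (1 - alpha_m) for m > j
    for j in range(m - 2, -1, -1):       # steps 1103-1107
        if remaining == 0.0:
            alpha[j] = 1.0               # step 1106: a later alpha was 1
        else:
            alpha[j] = beta[j] / remaining   # formula 64
        remaining *= 1.0 - alpha[j]
    return alpha
```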
The basic configuration of an image generation device that generates such arbitrary viewpoint images is the same as that of the image generation device explained in embodiment 2-1, and the unit corresponding to the projection plane information-display plane information conversion unit 606 shown in Figure 35 may be a unit that performs the mixing described above. The explanation of the device is therefore omitted.
As described above, according to the image generating method of embodiment 2-2, an arbitrary viewpoint image that looks comparatively natural to the observer can be generated even without obtaining an accurate three-dimensional shape of the object.
Also in the arbitrary viewpoint image display method of embodiment 2-2, the camera sets may be set dynamically, for example by program processing, so as to meet conditions specified by the observer while the displayed image is being generated. In that case, if the observer enters conditions such as a distribution or a threshold of the correlation degree Q_j from the image condition input unit and the processing of step 306 is performed while searching for camera sets that meet those conditions, a three-dimensional image close to the image the observer desires can be displayed.
Also in the image generating method of embodiment 2-2, the acquired images may be either colour images or black-and-white images. When they are black-and-white images, the luminance information (Y) is used as the information corresponding to the colour information; after the processing explained in embodiment 2-1 has been performed and the three-dimensional shape of the object obtained, the virtual viewpoint image is generated by the procedure explained in embodiment 2-2.
(Effects of the second embodiment)
As described above, when the image generating method of the second embodiment obtains the three-dimensional shape of the subject, it sets a plurality of projection planes and gives each point (projection point) on the projection planes that appears overlapped from the reference viewpoint a probability that the object surface exists there (the existence probability). That is, instead of assuming, as in conventional methods, that the object surface exists on one of the projection planes containing the projection points that appear overlapped from the reference viewpoint and trying to obtain an accurate three-dimensional shape of the subject, the method obtains the three-dimensional shape of the subject on the assumption that the object surface exists at each projection point with a certain existence probability. In this way, when the distance to the object surface in some direction from the reference viewpoint is estimated, portions where the reliability of the estimate is low are rendered faintly, at ratios corresponding to the existence probabilities of the projection points. Therefore, when an image is generated from the three-dimensional shape of the object, the discontinuous noise that conventional methods produce when the distance to the object surface is estimated incorrectly can be made inconspicuous, and an image that looks very natural can be generated.
Moreover, as mentioned above, when the existence probability is computed from the images included in a camera set among the acquired images, then in cases where, for example, the object surface near a certain projection point cannot be seen from viewpoints in some region because of occlusion (an occlusion region), the existence probability can be computed with the images taken from those viewpoints excluded, which improves the reliability of the existence probability of each projection point.
When the existence probabilities are determined, if the probability density distribution of the existence probability can be assumed to some extent, an evaluation criterion value may be computed from the correlation of each projection point, and the existence probability determined on the basis of a distribution function of the existence probability obtained by statistically processing those evaluation criterion values. When the existence probabilities are determined with such statistical processing, a drop in the reliability of the existence probabilities caused by noise on the acquired images can be prevented.
[Third embodiment]
Next, the third embodiment of the present invention is described. The third embodiment corresponds mainly to claims 22 to 29. In the third embodiment, the three-dimensional shape of the subject appearing in the images is obtained from a plurality of images taken from a single viewpoint while changing the focusing distance (multi-focus images), and an image of the subject seen from an arbitrary viewpoint (virtual viewpoint) is generated. That is, whereas the first and second embodiments obtain the three-dimensional shape of the subject from a plurality of images obtained by photographing the subject from a plurality of viewpoints, the present embodiment is characterized by using a plurality of images taken from a single viewpoint while changing the focusing distance. In the present embodiment as well, the three-dimensional shape of the subject is expressed by multiple planes using the texture mapping method. In the figures for explaining the third embodiment, parts with the same function are given the same reference numerals.
Figures 43 to 51 are schematic diagrams for explaining the principle of the image generating method of the present embodiment. Figures 43 and 44 show examples of setting the projection planes and the reference viewpoint, Figure 45 explains the method of determining the colour information and focusing degree of a projection point, Figures 46 to 48 explain the method of determining the existence probability of a projection point, Figure 49 explains the method of generating the image seen from the virtual viewpoint, Figure 50 explains a problem in the image generating method of the present embodiment, and Figure 51 explains a method of solving that problem.
In the image generating method of the present invention, as mentioned above, the three-dimensional shape of the subject appearing in the images is obtained from a plurality of images (multi-focus images) taken from a single viewpoint while changing the focusing distance, and an image of the subject seen from an arbitrary viewpoint (virtual viewpoint) is generated. The three-dimensional shape of the subject is expressed by multiple planes using the texture mapping method.
When the texture mapping method is used to express the three-dimensional shape of the subject, the camera viewpoint C, projection planes L_j (j = 1, 2, ..., M) with a multi-layered structure, and the reference viewpoint R for obtaining the three-dimensional shape of the subject are set, as shown in Figure 43, in a virtual three-dimensional space established in an image generation device such as a computer. When the shape of the subject is obtained from N images with different focusing distances, the projection planes L_j are set, as shown in Figure 44, at distances that coincide with the focusing distances f_i (i = 1, 2, ..., N) of the images Img_i.
Here, as shown in Figure 44, consider the projection points T_j (j = 1, 2, ..., N) that appear overlapped when a certain direction is observed from the reference viewpoint R. In conventional model acquisition methods, the surface of the subject is thought to exist at some one of the projection points T_j, and at which of the projection points T_j it exists is determined, for example, from the height of the focusing degree of each projection point T_j. Therefore, first, the colour information K_j and focusing degree Q_j of each projection point T_j that appears overlapped from the reference viewpoint R are determined.
The colour information K_j and focusing degree Q_j of a projection point T_j are determined, as shown in Figure 45, on the basis of the colour information κ_i of the points (corresponding points) G_i on the images Img_i corresponding to the projection point T_j, and of the degree of focus (focusing degree) at the corresponding points G_i. The colour information K_j of the projection point T_j is set, for example, to the mean of the colour information κ_i of the corresponding points G_i, or to the colour information κ_{i=j} of the spatially coincident corresponding point G_{i=j}. The focusing degree of the projection point T_j is determined from the sharpness or blur of the point, or of a small region, on the image. There are various methods of computing the focusing degree, based on the Depth from Focus theory or the Depth from Defocus theory. For the Depth from Focus and Depth from Defocus theories, see, for example, the following documents.
Document 8: A. P. Pentland: "A New Sense for Depth of Field," IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, No. 4, pp. 523-531 (1987).
Document 9: Murali Subbarao and Gopal Surya: "Depth from Defocus: A Spatial Domain Approach," International Journal of Computer Vision, 13, 3, pp. 271-294, Kluwer Academic Publishers.
Document 10: M. Ishihara and H. Sasaki: "High-speed three-dimensional shape measurement by the focusing method" (in Japanese), Journal of the Japan Society for Precision Engineering, Vol. 63, No. 1, pp. 124-128.
Document 11: K. Ohba and S. Yamada: "Real-time all-in-focus microscope camera" (in Japanese), O plus E, Vol. 22, No. 12, pp. 1568-1576, 2000, New Technology Communications.
The focusing degree Q_j is obtained, for example, by comparing the magnitudes of the local spatial frequency at the corresponding points G_i.
The Depth from Focus theory and the Depth from Defocus theory are methods of measuring the surface shape of the object by analyzing a plurality of images with different degrees of focus. In these methods, the object surface can be estimated to exist, for example, at the distance corresponding to the focusing distance of the image with the highest local spatial frequency among the images taken while changing the focusing distance. The focusing degree Q_j of the projection point T_j is therefore computed using, for example, the local spatial frequency evaluation function shown in formula 65 below.
[formula 65]

$$Q = \frac{1}{D} \sum_{x=x_i}^{x_f} \sum_{y=y_i}^{y_f} \left\{ \sum_{p=-L_c}^{L_c} \sum_{q=-L_r}^{L_r} \left| f(x, y) - f(x+p,\, y+q) \right| \right\}$$
Here, f is the grey value of a pixel, D is a constant for normalization equal to the total number of pixels evaluated, and (-L_c, -L_r)-(L_c, L_r) and (x_i, y_i)-(x_f, y_f) are small regions used for dispersion evaluation and smoothing, respectively.
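A direct, unoptimized sketch of the evaluation function of formula 65 on a greyscale image follows; it assumes that the evaluated window plus its (L_c, L_r) neighbourhood lies inside the image, and it indexes the array as img[x, y] to match the notation f(x, y).

```python
def focusing_degree(img, x0, y0, x1, y1, lc=1, lr=1):
    # img: 2-D array of grey values f(x, y)
    # (x0, y0)-(x1, y1): the small region evaluated; (lc, lr): neighbourhood radii
    total, count = 0.0, 0
    for x in range(x0, x1 + 1):
        for y in range(y0, y1 + 1):
            for p in range(-lc, lc + 1):
                for q in range(-lr, lr + 1):
                    total += abs(float(img[x, y]) - float(img[x + p, y + q]))
            count += 1
    return total / count   # D = number of evaluated pixels
```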
This processing is performed for all projection points T_j that appear overlapped from the reference viewpoint R, and after the colour information and focusing degree Q_j of each projection point T_j have been determined, as shown in Figure 46, the distance at which the subject surface exists is estimated from the heights of the focusing degrees Q_j of the projection points T_j. When, among the focusing degrees Q_j of the projection points T_j that appear overlapped from the reference point R, only the focusing degree Q_n of one projection point T_n shows a very high value, as in Figure 47(a), it can be estimated that the surface of the subject exists at that projection point T_n, and the reliability of the estimate is also high.
However, depending on the shooting conditions of the acquired images, the shape of the subject, or the pattern (texture) of the subject surface, there may be no projection point T whose focusing degree Q takes a distinctively large value, as in Figure 47(b), for example. In the example shown in Figure 47(b), the focusing degrees Q of the projection points T_n and T_n*, for instance, somewhat exceed those of the other projection points T, so the surface of the subject is thought to exist at one of the projection points T_n, T_n*. But since neither focusing degree Q is a distinctively large value, the reliability is low if one of the projection points is selected, and in some cases the wrong projection point is selected. When the estimation (selection) of the projection point on the subject surface is wrong, large noise appears on the generated image.
In such cases, to raise the reliability in estimating the distance of the subject surface, i.e. on which projection plane the surface exists, it is necessary to narrow the interval of the focusing distances and use more images, or to estimate the distance of the subject surface using not only the peak value but also the focusing degrees before and after it, fitted, for example, to a well-known function such as a normal distribution function.
However, if more images are used, the processing time increases and adjusting the focusing distance in the camera unit becomes difficult. Moreover, when the camera is focused at a certain distance, there is a range before and after it called the depth of field, and for points within the depth of field virtually no blur occurs on the captured image even when they are not exactly at the focusing distance. Therefore, subdividing the focusing distances is effective only down to intervals of about the depth of field, and finer subdivision brings no appreciable further effect. Furthermore, when the spatial frequency of the texture of the subject surface is low, i.e. when the variation is small or the pattern is uniform, the image hardly changes even if the focusing distance is changed. In such cases a highly reliable estimate is difficult in principle.
As a result, even when the focusing degrees Q are distributed as shown in Figure 47(b), it is usually assumed that the surface of the subject exists at the projection point T_j at which the focusing degree Q is maximal. The distance of the subject surface is then often estimated incorrectly, and large noise often appears on the generated image.
Therefore, in the image generating method of the present invention, the distance of the subject surface is not fixed at one particular point, i.e. at some one of the projection points T_j that appear overlapped from the reference viewpoint R; instead, as shown in Figure 48, each projection point T_j is given an existence probability β_j corresponding to the height of its focusing degree Q_j. The existence probabilities β_j must satisfy the conditions of formulas 66 and 67 below over the set of existence probabilities β_j of all projection points T_j that appear overlapped from the reference viewpoint R.
[formula 66]

$$0 \le \beta_j \le 1$$
[formula 67]

$$\sum_{j=1}^{M} \beta_j = 1$$
Therefore, if there are M projection planes L_j and M projection points T_j that appear overlapped from the reference viewpoint R, the existence probability β_k of the projection point T_k on the k-th projection plane L_k is obtained from formula 68 below.
[formula 68]

$$\beta_k = \frac{Q_k}{\sum_{j=1}^{M} Q_j}$$
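Formula 68 is a simple normalization of the focusing degrees; a sketch (assuming at least one Q_j on the ray is nonzero):

```python
def existence_probabilities(q):
    # q: focusing degrees Q_1..Q_M of the projection points on one ray
    s = float(sum(q))
    return [qk / s for qk in q]   # formula 68; satisfies formulas 66 and 67
```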
In this way, by performing the processing that determines the existence probability β_j of each projection point T_j for the projection points T_j that appear overlapped from the reference viewpoint R, in all directions, the three-dimensional shape of the subject is obtained. Then, when an image of the subject seen from a virtual viewpoint P is generated, as shown for example in Figure 49, the virtual viewpoint P is set in the space in which the projection planes L_j were set, and the colour information of each point on the image to be generated is determined. The colour information K_A of a point A on the generated image is determined from the colour information K_j and existence probabilities β_j of the projection points T_j that appear overlapped with the point A when seen from the virtual viewpoint P, for example using formula 69 below.
[formula 69]

$$K_A = \sum_{j=1}^{M} \beta_j K_j$$
When the colour information of every point on the generated image has been determined using formula 69, the image of the subject seen from the virtual viewpoint P (the virtual viewpoint image) is obtained. On the generated virtual viewpoint image, when only the focusing degree Q_n of one projection point T_n has a distinctively large value, as in Figure 47(a), only the existence probability β_n of that projection point T_n is large. Then, when the colour information is mixed using formula 69, the contribution of the colour information K_n of the projection point T_n to the colour information K_A of the point A on the generated image is high, and the point is rendered sharply. When the distance estimation of the subject surface is difficult, as in Figure 47(b), the existence probabilities β of the projection points T all take small values, so the contribution to the colour information K_A of the point A on the generated image is low and the point is rendered faintly. As a result, the large, discontinuous noise produced when the distance is estimated incorrectly can be reduced, and an image that looks better to the observer can be generated.
Moreover, the image generating method of the present invention can be implemented simply by texture mapping, a basic technique of computer graphics. The processing is therefore handled well by the three-dimensional graphics hardware installed in widely used personal computers, and the computational load is reduced accordingly.
In the image generating method of the present invention, however, the focusing degree Q_j is computed and the existence probability β_j determined for the projection points T_j that appear overlapped from one viewpoint such as the reference viewpoint. Therefore, depending on the shape of the subject and the positional relationship between the reference viewpoint and the virtual viewpoint, the projection points that appear overlapped from the virtual viewpoint P sometimes include two or more projection points with very high existence probabilities. In such cases, if the colour information of the projection points is mixed at ratios corresponding to the existence probabilities, the colour information of a point on the generated image can exceed the valid colour information range.
Here, as in the embodiments explained so far, consider the case shown in Figure 50, in which two projection planes L_1, L_2, a reference viewpoint R, and a virtual viewpoint P are set in the space where the subject Obj exists. Let the colour information at the projection points T_1, T_2, T'_1, T'_2 be K_1, K_2, K'_1, K'_2 respectively, and let the existence probabilities of the subject there be β_1, β_2, β'_1, β'_2.
Suppose the existence probabilities β_1, β_2, β'_1, β'_2 of the subject are determined on the straight lines passing through the reference viewpoint R. In the example shown in Figure 50, the surface of the subject Obj exists near the projection points T'_1 and T_2, so the existence probabilities at T'_1 and T_2 are higher than those at T_1 and T'_2. The existence probabilities β_1, β_2, β'_1, β'_2 of the projection points are then as shown in formulas 70 and 71 below.
[formula 70]

$$\beta_1 \cong 0,\qquad \beta_2 \cong 1$$
[formula 71]

$$\beta'_1 \cong 1,\qquad \beta'_2 \cong 0$$
In this case, according to formula 69, the colour information K_A of the point A on the image plane of the virtual viewpoint P is obtained by weighting the colour information K'_1, K_2 of the projection points T'_1, T_2, which appear overlapped with the point A on the image plane when seen from the virtual viewpoint P, by the existence probabilities β'_1, β_2 and summing, as shown in formula 72 below.
[formula 72]

$$K_A = \beta'_1 K'_1 + \beta_2 K_2$$
From formulas 70 and 71, formula 72 can be approximated by formula 73 below.
[formula 73]

$$K_A \cong K'_1 + K_2$$
However, when the object Obj is seen from the virtual viewpoint P, the object surface at the projection point T'_1 is occluded by the object surface at the projection point T_2, so the correct colour information of the point A on the image plane is K_A = K_2. That is, as in formula 73, the colour information K_A of the point A on the generated image is brighter in each (R, G, B) component, by K'_1, than the correct colour information.
Moreover, if each component of the colour information K'_1, K_2 of the projection points T'_1, T_2 has a large luminance value, the colour information K_A of the point A can exceed the range of the valid colour space, and a clipping process is needed to keep it within the range of the valid colour space.
Therefore, in the image generating method of the present invention, transparencies having a number of levels from fully transparent to fully opaque are set on the basis of the existence probabilities of the projection points. The mixing that yields the colour information of each point of the generated image then processes the projection points in turn, from the projection point farthest from the viewpoint of the generated image toward the nearest one, and the colour information obtained in the mixing up to a given projection point is obtained by interior division, at a ratio corresponding to the transparency, between the colour information of that projection point and the colour information obtained in the mixing up to the preceding one. The colour information obtained by this mixing is thus an interior division of the colour information at one stage and the next colour information.
To explain the principle of this colour information mixing, consider, as shown in Figure 51(a), the case in which projection planes L_j (j = 1, 2, ..., M) and projection points T_j are set in the colour space V, and the vector K_j having red (R), green (G), and blue (B) components expresses the colour information of a projection point T_j. The colour space V is expressed as formula 74 below.
[formula 74]

$$K_j \in V,\qquad V \equiv \{(R, G, B) \mid 0 \le R \le 1,\ 0 \le G \le 1,\ 0 \le B \le 1\}$$
The transparency α_j of the projection point T_j is set so as to satisfy the condition of formula 75 below.
[formula 75]

$$0 \le \alpha_j \le 1$$
The colour information D_m obtained in the mixing up to the variable j = m is then expressed by the recurrence relations of formulas 76 and 77 below. The colour information D_M obtained when the mixing reaches the projection plane L_M nearest to the virtual viewpoint P, i.e. when j = M, becomes the colour information K_A of the point A on the image plane of the generated image.
[formula 76]

$$D_m = \alpha_m K_m + (1 - \alpha_m) D_{m-1}$$
[formula 77]

$$D_1 = \alpha_1 K_1$$
From the relation between formulas 75 and 76, the colour information D_m is an interior division point in the colour space V between the vector K_m and the colour information D_{m-1}. Therefore, as shown in Figure 51(b), if K_m ∈ V and D_{m-1} ∈ V, then D_m ∈ V.
Thus, if the conditions of formulas 74 and 75 are satisfied, then, as described above, formula 78 below is guaranteed for the colour information D_M for the virtual viewpoint P.
[formula 78]

$$D_M \in V$$
That is, if the colour information K_j and transparencies α_j of the projection points T_j are set so as to satisfy formulas 74 and 75, the colour information D_M of the point A of the generated image is always contained in the suitable colour space V.
In this way, when the colour information mixing that uses the transparencies α_j is employed and images of the same subject seen from a plurality of virtual viewpoints are generated, even though the colour information and transparencies of the projection points are computed with respect to one particular viewpoint (the reference viewpoint), every image to be generated can be produced within a suitable colour information range as long as the colour information and transparencies satisfy the conditions of formulas 74 and 75.
Therefore, in the example shown in Figure 50, the transparencies α_1, α_2, α'_1, α'_2 given by formulas 79 and 80 below are set for the projection points T_1, T_2, T'_1, T'_2 respectively.
[formula 79]

$$\alpha_2 = \beta_2,\qquad \alpha_1 = 1$$
[formula 80]

$$\alpha'_2 = \beta'_2,\qquad \alpha'_1 = 1$$
To obtain the colour information of each point for the virtual viewpoint P, the mixing proceeds in turn from the projection point farthest from the virtual viewpoint P toward the nearest one, and the colour information obtained in the mixing up to a given projection point is obtained by interior division, at a ratio corresponding to the transparency, between the colour information of that projection point and the colour information obtained in the mixing up to the preceding one. The colour information K_A of the point A of the image seen from the virtual viewpoint P then becomes as in formula 81 below.
[formula 81]

$$K_A = \alpha_2 K_2 + (1 - \alpha_2)\,\alpha'_1 K'_1$$
By formulas 70, 71, 79, and 80, formula 81 becomes formula 82 below, which is a good approximation of the correct colour information.
[formula 82]

$$K_A \cong K_2$$
As described above, when the existence probabilities β are used directly in image generation, there is no problem when the reference viewpoint R is identical with the viewpoint P of the generated image, but when the two differ, an increase in brightness can occur near occlusion regions of the subject. Image generation in which the existence probabilities β are converted into transparencies has the effect of preventing this phenomenon.
Moreover, in image generation that uses the existence probabilities β directly, when the reference viewpoint R differs from the virtual viewpoint P and the colour information of a plurality of projection points is mixed, there is no guarantee that the colour information of a point on the image seen from the virtual viewpoint P stays within the range of the valid colour space, so a correction process, for example, is needed. In image generation in which the existence probabilities β are converted into transparencies, no such correction is needed.
Furthermore, image generation in which the existence probabilities β are converted into transparencies can also represent semi-transparent subjects effectively, so the present invention can be applied widely, to more of the subjects found in the real world.
Next, an example of the mathematical model that is the premise of the processing when images are generated with the image generating method of the present invention is described.
Figures 52 and 53 are schematic diagrams for explaining the mathematical model of the image generating method of the present invention. Figure 52 shows the relation among the projection points, the corresponding points, and the points on the generated image, and Figure 53 explains the conversion between points in space and pixels on the image.
When an image seen from the virtual viewpoint P is generated with the image generating method of the present invention, the colour information or luminance information of each point on the image seen from the virtual viewpoint is obtained, for example, by perspective projection transformation. Here, consider the case in which the camera viewpoint C, the projection planes L_j (j = 1, 2, ..., M), and the virtual viewpoint P are set as shown, for example, in Figure 52.
In general, the matrix that projects a projection point T_m (X, Y, Z) in three-dimensional space onto the image seen from the virtual viewpoint P, i.e. onto a point (x, y) on the generated image, is given as a 3-by-4 matrix. The projection matrix, and the matrix Φ_0 expressing the perspective projection transformation with focal length f centred at the origin, are as described in the first embodiment.
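The origin-centred perspective projection with focal length f mentioned here can be sketched in the standard textbook form below; the homogeneous 3-by-4 matrix is generic, and any camera rotation, translation, or pixel-coordinate conversion handled elsewhere in the patent is omitted.

```python
import numpy as np

def project(point_xyz, f):
    # Projects T = (X, Y, Z) to image coordinates (x, y) = (f X / Z, f Y / Z).
    phi0 = np.array([[f, 0.0, 0.0, 0.0],
                     [0.0, f, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])      # perspective projection matrix
    xh = phi0 @ np.append(np.asarray(point_xyz, float), 1.0)  # homogeneous coords
    return xh[0] / xh[2], xh[1] / xh[2]
```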
The relation between the image coordinates (x, y) and the digital image coordinates (u, v) shown in Figure 53 is also as explained in the first embodiment. When a two-dimensional array is written or read, the digital image coordinates (u, v) are quantized, but in the following description they are assumed to take continuous values unless otherwise noted, and appropriate discretization is performed when the array is accessed. A conversion that corrects the image distortion caused by lens aberration may also be performed.
(embodiment 3-1)
Figures 54 to 57 are schematic diagrams for explaining the image generating method of embodiment 3-1 of the present invention. Figure 54 is a flowchart showing the image generation process, Figure 55 explains the method of setting the projection point sequences, Figure 56 is a flowchart showing a concrete example of the processing of step 10305 in Figure 54, and Figure 57 explains the rendering method.
The image generating method of embodiment 3-1 generates images using the principle described above, and comprises, as shown in Figure 54: step 101 of acquiring a plurality of images with different focusing distances; step 102 of setting the observer's viewpoint (virtual viewpoint); step 103 of obtaining the three-dimensional shape of the subject from the acquired images; and step 104 of generating (rendering) the image of the three-dimensional shape of the subject obtained in step 103 as observed from the virtual viewpoint.
Step 103 comprises: step 10301 of setting projection planes with a multi-layered structure; step 10302 of determining the reference viewpoint for obtaining the three-dimensional shape of the subject; step 10303 of setting the projection point sequences, corresponding points, and so on; step 10304 of securing the texture arrays, i.e. the areas that store the colour information and existence probabilities of the projection points; and step 10305 of determining the colour information and existence probabilities of the projection points.
In the image generating method of embodiment 3-1, as shown in Figure 54, first a plurality of images obtained by photographing the subject while changing the focusing distance are acquired (step 101). The acquired images may be colour images or black-and-white images, but in embodiment 3-1 the description assumes colour images in which each point (pixel) is represented by three-primary colour information of red (R), green (G), and blue (B).
Next, the position from which the observer sees the subject (the virtual viewpoint) is set (step 102). Then the three-dimensional shape of the subject is obtained using the acquired images of the subject (step 103). After the three-dimensional shape of the subject has been obtained, the image of the subject as seen from the virtual viewpoint is generated (step 104).
In step 103, as shown in Figure 54, first projection planes L_j (j ∈ J, J ≡ {1, 2, ..., M}) with a multi-layered structure are set (step 10301). The projection planes L_j are set, for example, as parallel planes of planar shape, as shown in Figure 43. The intervals at which the projection planes are set preferably coincide, as shown in Figure 44, with the focusing distances of the images acquired in step 101, but they need not coincide.
Next, the viewpoint for obtaining the three-dimensional shape of the subject, in other words the reference point R used when obtaining the probability that the subject surface exists at a projection point, is determined (step 10302). The reference viewpoint R may be the same point as the virtual viewpoint P or a different point. When images of the subject seen from a plurality of virtual viewpoints P are generated in succession, it may be taken at their centre of gravity.
Next, the projection point sequences, each consisting of the group of projection points on a straight line passing through the reference viewpoint R, the points (corresponding points) on the images corresponding to the projection points, and so on are set (step 10303). The projection point sequence is defined, for example as shown in Figure 55, as the set of intersection points (projection points) T_j of a straight line passing through the reference viewpoint R with the projection planes L_j. Writing a projection point sequence as S = {T_j | j ∈ J} and its set as σ, we have S ∈ σ.
Next, arrays (texture arrays) that hold the images to be texture-mapped onto the projection planes are secured, for example in the memory of the device that generates the image (step 10304). The secured arrays have, for each pixel, 8-bit colour information (R, G, B) and existence probability information, as texture information corresponding to the positions of the projection points.
In step 10304, the correspondence between the two-dimensional digital coordinates (U_j, V_j) of the pixels of the texture arrays and the three-dimensional coordinates (X_j, Y_j, Z_j) of the projection points T_j is also set. At this time, the values (X_j, Y_j, Z_j) may, for example, be set as a table for all values of (U_j, V_j), or the values (X_j, Y_j, Z_j) may be set only for representative values of (U_j, V_j) and the remaining coordinates obtained by interpolation, for example linear interpolation.
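For the representative-point variant of step 10304, the remaining coordinates can be filled by bilinear interpolation; the sketch below assumes the table stores the (X, Y, Z) values of the four corner pixels of the texture array, which is an assumed layout for illustration.

```python
import numpy as np

def uv_to_xyz(u, v, corners):
    # corners[iu][iv]: (X, Y, Z) stored for the corner pixels (iu, iv) in {0, 1}^2
    # u, v: digital texture coordinates normalized to [0, 1]
    c = np.asarray(corners, float)                 # shape (2, 2, 3)
    top = (1.0 - u) * c[0, 0] + u * c[1, 0]        # interpolate along u at v = 0
    bottom = (1.0 - u) * c[0, 1] + u * c[1, 1]     # interpolate along u at v = 1
    return (1.0 - v) * top + v * bottom            # then along v
```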
Then, based on the images of the subject obtained in step 101, the colouring information K_j and existence probability β_j of the pixels corresponding to each subpoint T_j reserved in step 10304 are determined (step 10305). The colouring information and existence probabilities are determined by a double loop: the subpoints T_j on a given subpoint string S are scanned in sequence over T_j ∈ S, and this processing is repeated over the subpoint strings S ∈ σ.
In the processing of step 10305, first, as shown in Figure 56, the subpoint string S to be scanned is initialized to its starting position (step 10305a). Then the subpoint T_j to be scanned is initialized to the starting position within the subpoint string S, for example j = 1 (step 10305b).
Next, the colouring information K_j of the subpoint T_j at coordinates (X_j*, Y_j*, Z_j*) is determined (step 10305c). In step 10305c, it is first calculated which position on the image surface (image sensor) a point located at coordinates (X_j*, Y_j*, Z_j*) corresponds to when photographed. Then the colouring information of the pixel (U_j*, V_j*) on the texture array corresponding to the subpoint T_j is determined, for example, from the colouring information at the corresponding points (u_ij*, v_ij*) (i ∈ I).
Next, the focus degree Q_j of the subpoint T_j is determined. The focus degree Q_j is calculated, for example, from the magnitude of the local spatial frequency at the corresponding points, using formula 65 above (step 10305d).
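Formula 65 appears earlier in the document and is not reproduced here; as a stand-in illustration, a common focus measure based on local spatial frequency is the variance of the Laplacian in a small window around the corresponding point. A minimal sketch under that assumption (function name and window radius are hypothetical):

```python
import numpy as np
from scipy.ndimage import laplace

def focus_degree(image, u, v, radius=4):
    """Approximate the focus degree Q_j at corresponding point (u, v):
    in-focus regions have stronger high-frequency content, so the
    local variance of the Laplacian is larger there."""
    patch = image[max(v - radius, 0):v + radius + 1,
                  max(u - radius, 0):u + radius + 1].astype(float)
    return float(laplace(patch).var())
```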
When the processing up to step 10305d is finished, the subpoint T_j is updated and it is judged whether all subpoints T_j ∈ S have been scanned (step 10305e). If the scan is complete, processing proceeds to the next step 10305f; if it is not yet complete, processing returns to step 10305c.
When it is judged in step 10305e that the scan is complete, the existence probability β_j, i.e. the probability that the subject exists on each subpoint, is determined for all subpoints T_j (j ∈ J) on the subpoint string S, based on the focus degrees Q_j calculated in step 10305d (step 10305f). The existence probability β_j is determined, for example, using formula 68 above. Since β_j basically only needs to satisfy the conditions of formulas 66 and 67, formulas other than formula 68 may also be used.
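A natural reading of formulas 66 to 68 is that the existence probabilities are the focus degrees normalized along the subpoint string so that each lies in [0, 1] and they sum to 1. A minimal sketch under that assumption:

```python
def existence_from_focus(q):
    """Normalize the focus degrees Q_j along one subpoint string into
    existence probabilities beta_j satisfying 0 <= beta_j <= 1 and
    sum(beta_j) == 1 (the conditions of formulas 66 and 67)."""
    total = sum(q)
    if total == 0.0:                      # no focus cue: spread uniformly
        return [1.0 / len(q)] * len(q)
    return [qj / total for qj in q]
```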
Then, the subpoint string S is updated and it is judged whether all subpoint strings S ∈ σ have been scanned (step 10305g). If the scan is complete, the processing of step 103, i.e. the acquisition of the three-dimensional shape of the subject, ends. If unscanned subpoint strings remain, processing returns to step 10305b.
When it is judged in step 10305g that all subpoint strings have been scanned, the image of the subject expressed by the projecting planes L_j (j = 1, 2, ..., M) observed from the virtual viewpoint P is rendered according to the existence probabilities β_j (step 104). Here, as shown for example in Figure 57, suppose the image surface of the virtual viewpoint P has coordinates u_p, v_p. The colouring information K_P* of a pixel p*(u_p*, v_p*) on the image surface is determined as the colouring information {K_j* | j ∈ J} of the subpoint string {T_j* | j ∈ J} lying on the straight line connecting the virtual viewpoint P and the pixel p*, each multiplied by the corresponding existence probability {β_j* | j ∈ J} and summed, as expressed by formula 83 below.
[formula 83]
$$K_P^* = \sum_{j=1}^{M} \beta_j^* K_j^*$$
If formula 83 is used to determine the colouring information for all pixels on the image surface, the image at the virtual viewpoint P is obtained.
Furthermore, when K_P* is calculated using formula 84 below instead of formula 83, K_P* is guaranteed to remain within the range of the effective colour space even when the referenced viewpoint R and the virtual viewpoint P are at different positions.
[formula 84]
$$K_P^* = \frac{\sum_{j=1}^{M} \beta_j^* K_j^*}{\sum_{j=1}^{M} \beta_j^*}$$
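A minimal sketch of the per-pixel blends of formulas 83 and 84, assuming the colouring information and existence probabilities of the subpoints along one line of sight have already been gathered into arrays:

```python
import numpy as np

def blend_pixel(colors, betas, normalize=True):
    """Blend the colouring information K_j* of the subpoints seen
    overlapping from the virtual viewpoint, weighted by their
    existence probabilities beta_j* (formula 83). With normalize=True
    the weights are renormalized (formula 84) so the result stays in
    the effective colour range even when R and P differ."""
    colors = np.asarray(colors, dtype=float)   # shape (M, 3): RGB per plane
    betas = np.asarray(betas, dtype=float)     # shape (M,)
    k = betas @ colors
    if normalize and betas.sum() > 0:
        k /= betas.sum()
    return k
```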
The process described here determines the colouring information by scanning the pixels of the image surface, but the invention is not limited to this; for example, the structure of the projecting planes L_j, the texture arrays, the settings of the virtual viewpoint P and so on may be handed to a general-purpose graphics library such as OpenGL or DirectX, which is then entrusted with the rendering.
This completes the generation processing of the virtual visual point image of embodiment 3-1, and the generated image is output to an image display device such as a CRT (Cathode Ray Tube) or LCD. In the image shown on the image display device, the colouring information of subpoints in the subject image whose focus degree Q calculated in step 10305d is low contributes little to the colouring information of the points on the generated image, so subpoints where the reliability of the distance estimation is low are rendered blurred. As a result, unlike images generated by existing methods, image portions do not appear missing or extremely degraded, and to the user the degradation appears negligible.
Figure 58 to Figure 61 are schematic diagrams showing the configuration of a device that generates images by the image generating method of embodiment 3-1; Figure 58 is a block diagram of the device structure, and Figure 59 to Figure 61 show structural examples of the subject image photography unit.
In Figure 58, 2 is the video generation device, 201 is the multi-focus image (subject image) acquisition unit, 202 is the virtual viewpoint setting unit, 203 is the projecting plane setting unit, 204 is the texture array reservation unit, 205 is the colouring information / existence probability determination unit, 206 is the rendering unit, 207 is the generated image output unit, 3 is the multi-focus image (subject image) photography unit, 4 is the viewpoint information input unit, and 5 is the image display unit. In Figure 59, 6 is a polarization-type binary optical system, 7, 7A, 7B are image sensors, 8 is a beam splitter, and ObjA, ObjB are subjects. In Figure 60, 9 is a polarizing filter. In Figure 61, 10 is a zoom lens, 11a, 11b, 11c, 11d are fixed-focus lenses, and 12 is a lens mount.
As shown for example in Figure 58, the video generation device 2 used when generating images by the image generating method of the present embodiment has: a subject image acquisition unit 201 that obtains a plurality of images with different focusing distances; a virtual viewpoint setting unit 202 that sets the viewpoint (virtual viewpoint) of the image to be generated; a projecting plane setting unit 203 that sets projecting planes with a multilayer structure in a virtual three-dimensional space; a texture array reservation unit 204 that allocates, in memory, arrays for the images (texture images) to be mapped onto the projecting planes; a colouring information / existence probability determination unit 205 that uses the texture arrays reserved by the texture array reservation unit 204 to determine the colouring information and existence probabilities of the points on each projecting plane (hereinafter, subpoints); a rendering unit 206 that mixes the colouring information of the subpoints in ratios corresponding to their existence probabilities and thereby determines the colouring information of each pixel of the image to be generated; and a generated image output unit 207 that outputs the image generated by the rendering unit 206.
The subject image acquisition unit 201 obtains, for example, images of the subject taken by a subject image photography unit 3 having a lens whose focusing distance changes according to the polarization component, such as a polarization-type binary optical system (see, for example, document 12: Japanese Unexamined Patent Application Publication No. 2000-258738). The unit is not limited to such a polarization-type binary optical system; images taken by a camera with a zoom lens may also be obtained (see, for example, document 13: Japanese Patent No. 3303275). Alternatively, a plurality of lenses with different focal lengths may be mounted integrally and switched at high speed while images are taken through each lens. The subject image acquisition unit 201 may successively obtain the changing position of the subject at fixed intervals, for example 30 Hz, or may obtain still images of the subject at an arbitrary time. Images of the subject recorded on a magnetic, electric or optical recording medium after being taken by the subject image photography unit 3 may also be obtained. The images of the subject are preferably taken at the same instant, but when the change in the position or posture of the subject is very slow and the subject can be regarded as stationary, this is not a requirement.
The virtual viewpoint setting unit 202 sets, for example, position, direction and angle of view as parameters of the viewpoint (virtual viewpoint) of the generated image. The virtual viewpoint may be determined automatically by the virtual viewpoint setting unit 202, or determined from information entered by the user via a viewpoint information input unit 4 such as a mouse or keyboard; it may also be supplied by another program or over a network.
The projecting plane setting unit 203 carries out, for example, the processing of steps 10301, 10302 and 10303 shown in Figure 54.
The texture array reservation unit 204 carries out the processing of step 10304 shown in Figure 54, holding for each pixel the colouring information and the information relating to the existence probability; for example, the three primary colours red (R), green (G) and blue (B) and the existence probability are each held in a texture array represented with 8 bits. The present invention does not, however, depend on such a particular data representation.
The colouring information / existence probability determination unit 205 carries out the processing of step 10305 shown in Figure 54, i.e. the processing from step 10305a to step 10305g shown in Figure 56. The rendering unit 206 carries out the processing of step 104 shown in Figure 54 based on the results of the colouring information / existence probability determination unit 205, generating the image of the subject seen from the virtual viewpoint P.
The virtual visual point image generated by the rendering unit 206 is output from the generated image output unit 207 and shown by an image display unit 5 such as a CRT, an LCD (Liquid Crystal Display) or a PDP (Plasma Display Panel). The image display unit 5 may be, for example, a two-dimensional flat display device or a curved display device surrounding the user. When a display device capable of stereoscopic display is used as the image display unit 5, the virtual viewpoint setting unit 202 may determine two viewpoints corresponding to the user's left and right eyes, generate a stereo pair of images seen from those two viewpoints, and present an independent image to each of the user's eyes. Furthermore, by generating images seen from three or more virtual viewpoints and using a three-dimensional display capable of showing images with three or more parallaxes, a three-dimensional image can be presented to more than one user.
The generated image output unit 207 may output the generated image not only to the image display unit 5 but also, for example, to a unit that writes it to an electric, magnetic or optical recording medium.
Although not shown, a storage unit for the generated images may also be provided in the video generation device 2; the generated images are stored, and in response to an instruction from the user the stored images are output and shown by the image display unit 5.
When, for example, a camera incorporating a polarization-type binary optical system is used as the photography unit 3, the subject Obj can be photographed at two focusing distances. A polarization-type binary optical system is an optical system using a material that exhibits optical anisotropy (birefringence); as shown for example in Figure 59(a), the focusing distance differs between the case where the polarization component of the light passing through the polarization-type binary optical system 6 is the p component and the case where it is the s component, giving f1 and f2. If, as in an ordinary photography unit, the light is imaged on a single image sensor 7, the image obtained from the image sensor 7 is a superposition of the image due to the p component and the image due to the s component, i.e. of the image taken at focusing distance f1 and the image taken at focusing distance f2. Therefore, as shown for example in Figure 59(b), if the light passing through the polarization-type binary optical system 6 is separated by a beam splitter 8 so that the p-component light is imaged on a first image sensor 7A and the s-component light on a second image sensor 7B, the image at focusing distance f1 and the image at focusing distance f2 can be obtained separately.
Here, as shown for example in Figure 59(b), if a subject ObjA lies near focusing distance f1 and another subject ObjB near focusing distance f2, then as shown in Figure 59(c) the image of the first image sensor 7A, i.e. the image formed by the p-component light, shows subject ObjA sharp and subject ObjB blurred. Conversely, the image of the second image sensor 7B shows subject ObjA blurred and subject ObjB sharp.
When separating the images taken using the polarization-type binary optical system 6, a polarizing filter 9 may also be placed between the polarization-type binary optical system 6 and the image sensor 7, as shown for example in Figure 60(a), instead of using the beam splitter 8. The polarizing filter 9 used here is, for example as shown in Figure 60(b), a filter in which filter elements 9A corresponding to the p component and filter elements 9B corresponding to the s component are arranged in a checkerboard pattern. When each filter element 9A, 9B has the same size as a pixel of the image sensor 7, or a size of n × n pixels, the two images shown in Figure 59(c) can be obtained by extracting from the image obtained by the image sensor 7 the pixels corresponding to the p component and to the s component, respectively.
When taking the plurality of images with different focusing distances, a zoom lens 10 may also be used instead of the polarization-type binary optical system 6. Using the zoom lens 10, as shown for example in Figure 61(a), images at four focal positions f1, f2, f3, f4 can be obtained with a single lens.
Alternatively, instead of changing the focal position by changing the refractive index of the lens medium as with the polarization-type binary optical system 6 or the zoom lens 10, mutually different fixed-focus lenses 11a, 11b, 11c, 11d may be supported integrally by a lens mount 12, as shown in Figure 61(b), and images taken while switching between the lenses at high speed, for example by rotating the lens mount 12.
As described above, according to the image generating method of embodiment 3-1, unlike existing means, an accurate geometric model of the subject is not sought in all situations and for all parts. Rather, on the premise that, depending on the photography conditions and the part of the subject, estimates with sufficient reliability cannot be obtained by distance estimation, parts with low estimation reliability are rendered blurred to reduce their contribution to the generated image, preventing extreme image degradation, while parts with high estimation reliability are rendered sharply and contribute strongly to the generated image. The degradation of image portions with low estimation reliability is therefore made inconspicuous, and a virtual visual point image with little apparent degradation to the user can be produced.
Furthermore, in the image generating method of embodiment 3-1, since the three-dimensional shape of the object is obtained by texture mapping and the image seen from the virtual viewpoint P is generated from it, the load imposed when the video generation device 2 shown in Figure 58 generates the virtual visual point image can be reduced, and the virtual visual point image can be generated at high speed.
The video generation device 2 need not be a dedicated device (computer); it can also be realized, for example, by a computer and a program. In that case, by producing a program that causes a computer to execute each step shown in Figure 54 and Figure 56 and executing it, even a commonly available personal computer can generate virtual visual point images with little image degradation easily and at high speed. The program may be recorded on a magnetic, electric or optical recording medium and supplied, or supplied over a network.
The structure of the video generation device and the generation method and processing procedure of the image described in embodiment 3-1 are examples; the gist of the present invention is to assign existence probabilities to the images texture-mapped onto the projecting planes of the multilayer structure, and to render parts where the reliability of the estimated distance is low blurred by texture-mapping them onto a plurality of projecting planes. Within a scope that does not depart greatly from this gist, the invention does not depend on a particular processing method or embodiment.
In the image generating method of embodiment 3-1, the case of obtaining colour images, in which each point (pixel) is represented by colouring information for the three primary colours red (R), green (G) and blue (B), and using them to generate the virtual visual point image was described as an example. The method is, however, not limited to such colour images; images in which each point (pixel) is represented by luminance (Y) and colour difference (U, V), or black-and-white images, may also be obtained and used to generate the virtual visual point image. When the obtained images are black-and-white, the luminance information (Y) is used as the information corresponding to the colouring information, and the virtual visual point image is generated by the procedure described in embodiment 3-1.
Figure 62 and Figure 63 are schematic diagrams showing configurations of image generation systems using the video generation device of embodiment 3-1; Figure 62 shows one structural example of an image generation system, and Figure 63 shows another.
The video generation device 2 of embodiment 3-1 can be applied, for example, to the image generation system shown in Figure 62. When the user User specifies the desired viewpoint position, direction and angle of view using the viewpoint information input unit 4 such as a mouse, the video generation device 2 obtains the images of the subject Obj taken by the photography unit 3. Having obtained the images of the subject Obj, the video generation device 2 then generates, by the procedure described in embodiment 3-1, the image of the subject Obj observed from the viewpoint position, direction and angle of view specified by the user User. The generated image is shown on the image display unit 5 and presented to the user User.
The photography unit 3 may be installed at a place geographically close to where the user User is, or at a geographically distant place connected via a network such as the Internet.
The video generation device 2 of embodiment 3-1 can be applied not only to one-way image generation systems in which the relation between the user User and the subject Obj is fixed, as shown for example in Figure 62, but also to two-way communication systems such as videophones or video conferencing.
When the video generation device 2 of embodiment 3-1 is applied to a two-way communication system, it suffices, as shown for example in Figure 63, to provide a photography unit 3A that photographs user UserA and a photography unit 3B that photographs user UserB. User UserA then uses the images of user UserB taken by the photography unit 3B to generate an image of user UserB observed from the desired viewpoint, and displays it on the image display unit on UserA's side. Similarly, user UserB uses the images of user UserA taken by the photography unit 3A to generate an image of user UserA observed from the desired viewpoint, and displays it on the image display unit on UserB's side. As shown in Figure 63, the video generation devices 2 may be placed in front of each of the users UserA and UserB, or only in front of one of them. Furthermore, if a video generation device 2C is provided on a network 13 such as the Internet or a company LAN, the images seen from the virtual viewpoints can be generated and shown even without a video generation device 2 in front of each user.
Figure 63 shows an example with two users, but the same image generation can be carried out among more users. By assuming a virtual communication space distinct from the real space in which the users actually exist, and showing each user the images of the other users corresponding to their positional relations in that space, the users can be given the sensation of sharing a virtual space (cyberspace) on the network.
The system configurations shown in Figure 62 and Figure 63 are application examples of the video generation device of the present invention and are not necessarily limiting. That is, the arrangement, form, implementation and so on of each device and unit may be set arbitrarily within a scope that does not depart from the gist of the present invention.
(embodiment 3-2)
Figure 64 is a flowchart showing the processing that characterizes embodiment 3-2. Embodiment 3-2 shows an example in which, in the virtual visual point image generation processing described in embodiment 3-1, the existence probabilities of the subpoints determined in step 10305f are converted into transparencies, and image generation is carried out using the transparencies instead.
The structure of the video generation device 2 and the overall processing procedure may be the same as in the example described in embodiment 3-1, so only the differing parts are described below.
In embodiment 3-1, the existence probabilities β_j determined in step 10305f are used, for example with formula 69 above, to determine the colouring information of each point on the image seen from the virtual viewpoint P and thereby generate the virtual visual point image. In this case, however, as explained with reference to Figure 50, the generated colouring information sometimes differs greatly from that of the original subject surface, depending on the shape of the subject and the positional relation between the referenced viewpoint and the virtual viewpoint. Embodiment 3-2 therefore presents, as a method of solving this problem, a method of converting the existence probabilities into transparencies and mixing the colouring information of the subpoints in ratios corresponding to the transparencies. The step of converting the existence probabilities into transparencies may be carried out after step 10305f within the processing of step 103, within step 104, or between step 103 and step 104. Accordingly, in embodiment 3-2, as shown in Figure 64, a step 105 of converting the existence probabilities and determining the transparencies is added immediately after step 10305f, in which the existence probabilities are determined.
In this case, whereas in step 10304 of embodiment 3-1 a texture array holding the colouring information and existence probabilities was reserved, in step 10304 of embodiment 3-2 a texture array holding the colouring information and transparencies is reserved.
The transparency α_j is calculated based on the existence probability β_j. As in step 10305f of embodiment 3-1, in embodiment 3-2 the existence probability is first calculated provisionally in step 10305f, and the transparency α_j is then calculated in the following step 105.
In the rendering processing of step 104 in embodiment 3-2, instead of formula 83 or formula 84 described in embodiment 3-1, D_j is calculated successively according to formulas 11 to 13 above. The colouring information K_P* of a pixel p*(u_p*, v_p*) on the image surface is therefore calculated as in formula 85 below.
[formula 85]
$$K_P^* = D_M = \alpha_M K_M + (1-\alpha_M)\,\alpha_{M-1} K_{M-1} + \cdots + (1-\alpha_M)(1-\alpha_{M-1})\cdots(1-\alpha_2)\,\alpha_1 K_1$$
The above is the image generating method of embodiment 3-2; the method of calculating the transparencies α_j based on the existence probabilities β_j is the same as the method described with reference to Figure 19(b) in the first embodiment.
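A minimal sketch of the compositing of formula 85, assuming the planes are ordered from j = 1 (farthest from the viewpoint) to j = M (nearest) and the transparencies α_j have already been derived from the existence probabilities:

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Back-to-front alpha compositing over M projecting planes
    (formula 85): D_1 = alpha_1 * K_1, then
    D_j = alpha_j * K_j + (1 - alpha_j) * D_{j-1}, and K_P* = D_M."""
    d = np.zeros(3)
    for k, a in zip(colors, alphas):        # j = 1 (far) ... M (near)
        d = a * np.asarray(k, dtype=float) + (1.0 - a) * d
    return d
```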
According to the image generating method of embodiment 3-2, as with embodiment 3-1, a virtual visual point image in which partial image degradation is inconspicuous can be generated easily and at high speed.
Moreover, as described in embodiment 3-1, in image generation that uses the existence probabilities directly, when the referenced viewpoint differs from the virtual viewpoint an increase in brightness sometimes occurs near occluded regions of the subject; image generation in which the existence probabilities are converted into transparencies, as in embodiment 3-2, has the effect of preventing this phenomenon. A virtual visual point image with little degradation and close to the actual subject can therefore be obtained.
Also as described in embodiment 3-1, in image generation that uses the existence probabilities directly, when the referenced viewpoint differs from the virtual viewpoint there is no guarantee that the mixed colouring information falls within the effective colouring information range, so correction processing, for example, becomes necessary. Image generation in which the existence probability information is converted into transparencies, as in embodiment 3-2, needs no such correction, so the image generation processing can be simplified.
Furthermore, in image generation in which the existence probabilities are converted into transparencies, as in the virtual visual point image generating method of embodiment 3-2, semi-transparent subjects can be represented effectively, so the effect of the present invention extends to a wider range of subjects in the real world.
The image generating method described in embodiment 3-2 is an example, and the gist of embodiment 3-2 is to generate the virtual visual point image after converting the existence probabilities into transparencies. Within a scope that does not depart greatly from this gist, the method does not depend on particular calculation methods or processing procedures.
Also in the image generating method of embodiment 3-2, the obtained images may be either colour images or black-and-white images; in the case of black-and-white images, the luminance information (Y) is used as the information corresponding to the colouring information, and the mixing processing described in embodiment 3-2 is carried out.
Figure 65 is a schematic diagram for explaining another generation method within the image generating method of the present invention.
The image generating methods of embodiments 3-1 and 3-2 were described on the premise that a general lens is used when obtaining the plurality of images with different focusing distances, and that the projection and inverse projection of colouring information can be approximated by a pinhole optical system. However, when images with different focusing distances are used, as in the image generating method of the present invention, and the plurality of images are obtained using, for example, a telecentric lens, a parallel projection system can be assumed for the projection and inverse projection of colouring information, as shown in Figure 65. In this case, the intersection points of the straight line passing through a point A on the image to be generated and perpendicular to the image surface with the projecting planes L_j are taken as the subpoints T_j, and the probabilities β_j that the subject exists on each subpoint T_j are obtained by the procedure described in the embodiments above. As shown in Figure 65, the x and y components of the coordinates of the corresponding point G_1 texture-mapped onto the subpoint T_1 are the x and y components of the point G on the image sensor. On each obtained image Img_i, the portion showing the same part of the scene as the corresponding point G_1 lies on the straight line passing through the point G on the image sensor and perpendicular to the image sensor; taking these points as corresponding points G_i, the colouring information K_1 and focus degree Q_1 of the subpoint T_1 are determined. This processing is carried out for the subpoints T_j on the same straight line, after which the existence probabilities β_j are obtained from the focus degrees of the subpoints.
(Effects of the third embodiment)
As described above, the image generating method of the third embodiment assigns, to the plurality of subpoints seen overlapping from the referenced viewpoint used to obtain the three-dimensional shape of the subject, colouring information or luminance information and the probability that the subject surface exists there (the existence probability). That is, rather than assuming, as in existing methods of obtaining three-dimensional shape, that the subject surface exists on exactly one of the subpoints seen overlapping from the referenced viewpoint, the subject surface is considered to exist on each of the subpoints with its existence probability. In this way, even when the reliability of the distance estimation is low, the object surface exists with some probability on the subpoint where it actually lies. When the image of the subject seen from the virtual viewpoint is then generated, points (pixels) on the generated image whose colouring or luminance information mixes in the information of subpoints with low existence probability, i.e. points on the subject where the reliability of the distance estimation is low, are rendered blurred. As a result, the discontinuous noise produced when the distance estimation is wrong can be made inconspicuous.
To obtain the three-dimensional shape of the subject, a plurality of images with different focus degrees is used, and the existence probability of each subpoint seen overlapping from the referenced viewpoint is determined based on the focus degrees of the points (corresponding points) on each image corresponding to each subpoint. Depending on the shape of the object and the positional relation between the referenced viewpoint and the virtual viewpoint, the plurality of subpoints seen overlapping from the virtual viewpoint therefore sometimes contains two or more subpoints with very high existence probability. In such cases, when the colouring or luminance information of the subpoints is mixed in ratios corresponding to the existence probabilities, the colouring information of points on the generated image can exceed the range of the effective colour space. Transparencies may therefore be set for the subpoints based on the existence probabilities, and the colouring information mixed in ratios corresponding to the transparencies. In this way, when the subpoints seen overlapping from the virtual viewpoint include two or more subpoints with high existence probability, the contribution to the colouring information of a point on the generated image from a subpoint seen as lying behind another, for example a subpoint invisible from the virtual viewpoint, can be reduced.
Processing that obtains an exact geometric model of every point on subjects of all shapes, as in existing generation methods, is not carried out. The load imposed on the device (computer) that generates the image can therefore be reduced, and with that load reduced, images can be generated at high speed even by a device with generally low processing performance such as a commonly available personal computer.
Moreover, when images taken from a single viewpoint while changing the focusing distance are used to obtain the three-dimensional shape of the subject, the camera used to take the images can be made smaller than existing devices that photograph from many viewpoints, and the device structure can be simplified. In this case, if the subject is photographed using, for example, a polarization-type binary optical system made of a material exhibiting optical anisotropy, in which the focusing distance differs according to the plane of polarization, images at two different focusing distances can be taken from a single viewpoint. Also, by preparing a plurality of lenses with different focusing distances and photographing while switching between the lenses at high speed, images at three or more different focusing distances can be taken from a single viewpoint.
[The fourth embodiment]
The fourth embodiment of the present invention will now be described. The fourth embodiment corresponds mainly to claims 30 to 43, and is characterized in that the existence probabilities are obtained by statistical processing (parameter fitting) of the evaluation reference values v_j. In the figures used to describe the fourth embodiment, parts with the same function are given the same reference numerals.
In the three-dimensional image display method of the fourth embodiment, the three-dimensional shape of the subject is obtained from a plurality of images of the subject taken under mutually different photography conditions, and based on the obtained three-dimensional shape a three-dimensional image of the subject is displayed on a display having a plurality of display surfaces, such as a DFD. When obtaining the three-dimensional shape of the subject, projecting planes with a multilayer structure are set in a virtual three-dimensional space, and for the points (subpoints) on the plurality of projecting planes seen overlapping from the observer's viewpoint, the colouring information or luminance information of each subpoint and the probability that the surface of the subject exists on the subpoint (the existence probability) are determined. Then, when the two-dimensional images to be shown on the plurality of display surfaces are generated based on the obtained three-dimensional shape, the colouring information or luminance information and the existence probability are assigned to the points on the two-dimensional images to which the colouring information or luminance information of the subpoints is distributed, and each point on the two-dimensional images is displayed on the picture display surface with a brightness corresponding to the magnitude of its existence probability. In this way, parts where the reliability of the estimated distance to the surface of the subject is low are displayed blurred, presenting the observer with a three-dimensional image that appears natural.
(embodiment 4-1)
Figure 66 to Figure 77 are schematic diagrams for explaining the image generating method of embodiment 4-1: Figure 66 is a flowchart showing an example of the overall processing procedure; Figure 67 and Figure 68 show an example of the method of setting the projecting planes; Figure 69 explains the method of setting the subpoint strings; Figure 70 is a flowchart showing an example of the processing procedure of the step that determines the colouring information and existence probabilities of the subpoints; Figure 71 to Figure 74 explain the method of determining the existence probability; and Figure 75 to Figure 77 explain the method of generating the two-dimensional images shown on each picture display surface.
The image generating method of embodiment 4-1 is a method of generating the images to be shown on an image display unit having a plurality of picture display surfaces seen overlapping in the depth direction from the observer, such as a DFD. As shown in Figure 66, it has: step 101 of obtaining a plurality of images of the object taken from different viewpoints; step 102 of setting the viewpoint of the observer (referenced viewpoint) who observes the displayed three-dimensional image of the object; step 103 of obtaining the three-dimensional shape of the object from the plurality of images; step 104 of generating, based on the three-dimensional shape obtained in step 103, the two-dimensional images to be shown on each picture display surface; and step 105 of presenting the three-dimensional image of the object by displaying each two-dimensional image generated in step 104 on the corresponding picture display surface.
When images for displaying a three-dimensional image of an object on the DFD are generated using the image generating method of embodiment 4-1, images of the object taken from different viewpoints are first obtained (step 101). The viewpoints from which the images are taken may, for example, be arranged linearly in a row, on an arc or an arbitrary curve, or two-dimensionally on a plane or curved surface. The obtained images may be colour images or black-and-white images, but in embodiment 4-1 they are assumed to be colour images in which each point (pixel) is represented by colouring information for the three primary colours red (R), green (G) and blue (B).
After the images have been obtained in step 101, the viewpoint of the observer who observes the object displayed on the DFD is set (step 102). For the observer's viewpoint, for example, the relative positional relation to the picture display surfaces, such as the distance from the picture display surface serving as the reference among the plurality of picture display surfaces, and the direction of the line of sight, are set.
After the observer's viewpoint has been set in step 102, the three-dimensional shape of the object shown in the images is obtained from the plurality of images obtained in step 101 (step 103). In step 103, first, projecting planes L_j (j = 1, 2, ..., M) with a multilayer structure are set (step 10301). Then, the referenced viewpoint R used to obtain the three-dimensional shape of the object is set (step 10302). The projecting planes L_j are set, for example as shown in Figure 67, as a plurality of planes parallel to the XY plane of the virtual three-dimensional space, each placed, as shown in Figures 67 and 68, at distance l_j from Z = 0 in the negative direction. The referenced viewpoint R is the viewpoint for obtaining the three-dimensional shape of the object and may be set at an arbitrary point in the three-dimensional space. The referenced viewpoint R is therefore taken to be the observer's viewpoint set in step 102; for example, regarding the projecting plane L_1 farthest from Z = 0 as the innermost picture display surface of the DFD as seen by the observer, it is set, as shown in Figure 68, so that the distance from the projecting plane L_1 equals the distance ld from the observer's viewpoint to the innermost picture display surface of the DFD.
After the projecting planes L_j and the referenced viewpoint R have been set in steps 10301 and 10302, the subpoints on the projecting planes and the corresponding points on the obtained images associated with each subpoint are set (step 10303). The subpoints are set, for example as shown in Figure 69, as the intersection points of straight lines drawn from the referenced viewpoint R in a plurality of directions with the projecting planes L_j. Since, when estimating the distance to the surface of the subject, it is estimated on which of the plurality of subpoints T_j on the same straight line the surface lies, the subpoints T_j on the same straight line are handled together as a subpoint string S, as shown in Figure 69.
The corresponding points are, as shown in Figures 67 and 68, the points G_ij on the image surface of each camera at which the subpoint T_j is seen when viewed from the viewpoint C_i of that camera. As shown in Figure 67, when a two-dimensional coordinate system (xy coordinate system) is set on each image surface, the two-dimensional coordinates (x_ij, y_ij) of the corresponding point G_ij corresponding to the subpoint T_j (X_j, Y_j, Z_j) are obtained by projecting the subpoint T_j onto the two-dimensional point on each image surface. This projection can be carried out using a general 3-row, 4-column projection matrix that projects a point (X, Y, Z) in three-dimensional space onto a point (x, y) on a two-dimensional plane. The relation between the coordinates (x_ij, y_ij) of the corresponding point G_ij in the virtual three-dimensional space and the digital image coordinates (u, v) is the same as described earlier.
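A minimal sketch of this projection in homogeneous coordinates, assuming a 3-row, 4-column projection matrix P for the camera is available (names are hypothetical):

```python
import numpy as np

def project(P, X):
    """Project a 3D point X = (X, Y, Z) onto the image plane using a
    3x4 projection matrix P, returning the 2D point (x, y)."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)  # homogeneous coords
    return x[:2] / x[2]                                 # perspective divide
```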
In step 10303, the correspondence between the digital image coordinates (u_ij, v_ij) of the corresponding points G_ij and the three-dimensional space coordinates (X_j, Y_j, Z_j) of the subpoints T_j is set. This correspondence may be set as a table of (X_j, Y_j, Z_j) values for all (u_ij, v_ij), or (X_j, Y_j, Z_j) values may be set only for representative (u_ij, v_ij), with the other points obtained by interpolation processing, for example linear interpolation.
In the digital image coordinate system, (u, v) takes quantized values, but in the following description they are treated as continuous values unless otherwise stated, with suitable discretization applied when accessing the two-dimensional arrays.
After the subpoint strings, corresponding points and so on have been determined in step 10303, arrays (texture arrays) are reserved for storing the information of the projecting planes L_j, i.e. the images to be texture-mapped onto the projecting planes L_j (step 10304). Each reserved array holds, as texture information corresponding to the positions of the subpoints T_j, for example 8 bits each of colouring information and existence probability information per pixel.
After the arrays storing the information of the projecting planes have been reserved in step 10304, the colouring information and existence probability of each subpoint T_j are determined (step 10305). In step 10305, as shown for example in Figure 70, a double loop is carried out in which the processing that determines the colouring information and existence probability of each subpoint T_j on a given subpoint string is repeated for all the subpoint strings that have been set. First, therefore, the subpoint string is initialized (step 10305a). Then the subpoint T_j on the subpoint string is initialized, for example to j = 1 (step 10305b).
Next, the colouring information of the subpoint T_j is determined (step 10305c). In step 10305c, for example, the mean value of the colouring information K_i of the corresponding points G_ij set in step 10303 is determined as the colouring information K_j of the subpoint T_j.
Then, the degree of correlation Q_j of the points on the object shown at the corresponding points G_ij (i ∈ I) associated with the subpoint T_j is obtained (step 10305d). Denoting the vector of colouring information of the subpoint T_j by K_j and the vector of colouring information of each corresponding point G_ij by K_ij, the degree of correlation Q_j is obtained by formula 86 below.
[formula 86]
$$Q_j = \sum_{i \in I} \left( K_j - K_{ij} \right)^2$$
When the degree of correlation Q_j is obtained using formula 86, Q_j always takes a positive value, and the higher the correlation, the smaller the value.
Formula 86 is one example of a method of obtaining the degree of correlation Q_j; formulas other than formula 86 may also be used. Moreover, in obtaining Q_j, not only the single points T_j and G_ij may be considered, but also small regions comprising several points in the neighbourhood of the subpoint T_j and the corresponding points G_ij.
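A minimal sketch of steps 10305c and 10305d for a single subpoint, assuming the colours at its corresponding points have already been sampled into an array (one RGB row per camera):

```python
import numpy as np

def color_and_correlation(corr_colors):
    """Given the colours K_ij at the corresponding points of one
    subpoint (shape (num_cameras, 3)), return the subpoint colour K_j
    as their mean (step 10305c) and the degree of correlation Q_j as
    the sum of squared differences from that mean (formula 86);
    a smaller Q_j means higher correlation."""
    c = np.asarray(corr_colors, dtype=float)
    k = c.mean(axis=0)
    q = float(((c - k) ** 2).sum())
    return k, q
```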
After the degree of correlation Q_j has been obtained in step 10305d, the subpoint T_j is updated and it is confirmed whether the processing of steps 10305c and 10305d has been carried out for all subpoints on the subpoint string being processed (step 10305e). If subpoints remain for which this processing has not been carried out, processing returns to step 10305c and the colouring information K_j and degree of correlation Q_j are obtained.
Once the colouring information and degree of correlation Q_j have been obtained for all subpoints on the subpoint string being processed, each subpoint T_j on the subpoint string has been given colouring information K_j and a degree of correlation Q_j, as shown in Figure 71. When the degrees of correlation Q_j of the subpoints T_j are compared, in general only the degree of correlation Q_m of one subpoint T_m takes a distinctively small value, as shown in Figure 72(a). In such a case, it can be estimated that the object surface on this subpoint string lies on the subpoint T_m, and the reliability of this estimate is high.
However, depending on the shape of the object, the pattern (texture) of its surface, the photography conditions and so on, when the degrees of correlation Q_j of the subpoints on a subpoint string are compared, there is sometimes no subpoint whose degree of correlation takes a distinctively small value, as shown in Figure 72(b). In such a case, even if the object surface is estimated to lie on some particular subpoint, the reliability of the estimate is low and the judgment may be mistaken; and when the judgment is mistaken, its influence appears as large noise on the generated image.
Therefore, the probability β_j that the surface exists on each subpoint T_j on the subpoint string (the existence probability) is determined based on the magnitudes of the degrees of correlation Q_j of the subpoints T_j. The existence probability β_j could be obtained directly from the degree of correlation Q_j; however, if the obtained images contain noise, the reliability of Q_j is low, and the existence probability β_j would also be affected and its reliability reduced. In the image generating method of embodiment 4-1, therefore, an evaluation reference value v_j, used as a preliminary value for the existence probability β_j, is first obtained (step 10305f). The evaluation reference value v_j must satisfy formulas 87 and 88 below.
[formula 87]
$$0 \le v_j \le 1$$
[formula 88]
$$\sum_{j=1}^{M} v_j = 1$$
Given that the higher the probability that the surface exists on the subpoint T_j, the closer to 1 the evaluation reference value v_j is taken to be, the evaluation reference values v_j (j ∈ J) can be obtained, for example, by applying the conversion processing expressed by formulas 89 and 90 below to the degrees of correlation Q_j obtained for each subpoint T_j on the subpoint string.
[formula 89]
$$\tilde{v}_j = \frac{1}{Q_j}$$
[formula 90]
$$v_j = \frac{\tilde{v}_j}{\sum_{j'=1}^{M} \tilde{v}_{j'}}$$
Basically, the evaluation reference value v_j only needs to satisfy the conditions of formulas 87 and 88, so the conversion processing may also be calculated using formulas other than formulas 89 and 90.
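A minimal sketch of the conversion of formulas 89 and 90; the small constant guarding against Q_j = 0 is an added assumption, not part of the formulas:

```python
def evaluation_reference_values(q, eps=1e-12):
    """Convert the degrees of correlation Q_j along one subpoint
    string into evaluation reference values v_j: take reciprocals
    (formula 89), then normalize (formula 90) so that 0 <= v_j <= 1
    and sum(v_j) == 1 (formulas 87 and 88)."""
    v_tilde = [1.0 / (qj + eps) for qj in q]   # eps avoids division by zero
    total = sum(v_tilde)
    return [v / total for v in v_tilde]
```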
The evaluation reference values v_j of the subpoints T_j calculated using formulas 89 and 90 as described above could themselves be used as the probabilities that the object surface exists (existence probabilities), but because of noise in the obtained images, using them directly sometimes gives existence probabilities of insufficient reliability. Therefore, a probability distribution model for the object being generated is next assumed, and statistical processing (parameter fitting) is applied to the evaluation reference values v_j to obtain, for example, the fitting function p(l) shown in Figure 73(a) (step 10305g).
Here, if the probability density of the probability that the object exists at distance l is assumed to follow a normal distribution (Gaussian distribution), the fitting function p(l) of the evaluation reference values v_j can be expressed as formula 91 below.
[formula 91]
$$p(l) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left\{ -\frac{(l-\mu)^2}{2\sigma^2} \right\}$$
Here, μ is the mean of the existence probability distribution and σ² is its variance, given respectively by formulas 92 and 93 below.
[formula 92]
$$\mu = \sum_{j=1}^{M} v_j\, l_j$$
[formula 93]
$$\sigma^2 = \sum_{j=1}^{M} v_j\, (l_j - \mu)^2$$
After the fitting function p(l) has been obtained, the probability β_j that the object exists at each projecting plane L_j at distance l_j, i.e. on each subpoint T_j, is determined from this function p(l) (step 10305h). The existence probability β_j is determined, for example, using formula 94 below.
[formula 94]
$$\beta_j = \int_{l_j^-}^{l_j^+} p(l)\, dl$$
Here, as shown in Figure 73(b), l_j^- and l_j^+ are respectively the lower and upper limits of the distance range contributing to the projecting plane L_j, given for example by formulas 95 and 96 below.
[formula 95]
$$l_j^- = \frac{l_{j-1} + l_j}{2}, \qquad l_1^- = -\infty$$
[formula 96]
$$l_j^+ = \frac{l_j + l_{j+1}}{2}, \qquad l_M^+ = \infty$$
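A minimal sketch of steps 10305g and 10305h under the Gaussian assumption, combining formulas 91 to 96; the inputs are the plane distances l_j (assumed sorted in ascending order) and the evaluation reference values v_j:

```python
import math

def existence_from_fit(l, v):
    """Fit a Gaussian p(l) to the evaluation reference values v_j at
    plane distances l_j (formulas 91-93) and integrate it over each
    plane's interval (formulas 94-96) to get existence probabilities."""
    M = len(l)
    mu = sum(vj * lj for vj, lj in zip(v, l))                   # formula 92
    var = sum(vj * (lj - mu) ** 2 for vj, lj in zip(v, l))      # formula 93
    sigma = math.sqrt(var) or 1e-12                             # guard sigma == 0
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    beta = []
    for j in range(M):
        lo = -math.inf if j == 0 else 0.5 * (l[j - 1] + l[j])       # formula 95
        hi = math.inf if j == M - 1 else 0.5 * (l[j] + l[j + 1])    # formula 96
        beta.append(cdf(hi) - cdf(lo))                              # formula 94
    return beta
```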
When the processing from step 10305c to step 10305h has been carried out, the colouring information K_j and existence probability β_j of each subpoint T_j on the subpoint string have been determined as shown in Figure 74, and their values are stored in the area reserved in step 10304.
In this way, when a probability density distribution can be assumed for the probability that the object surface exists on a given subpoint string, obtaining the existence probabilities β_j from the fitting function p(l) of the evaluation reference values v_j reduces the influence of noise in the photographed images.
Formula 91 is one example of a fitting function; parameter fitting may also be carried out using various functions matched to the shape distribution of the object, for example a Laplacian distribution function.
After the colouring information K_j and existence probabilities β_j of the subpoints T_j have been stored, the subpoint string is updated and it is confirmed whether the processing from step 10305c to step 10305h has been carried out for all the subpoint strings determined in step 10303 (step 10305i). If subpoint strings remain for which this processing has not been carried out, processing returns to step 10305b and the processing from step 10305c to step 10305h is repeated.
When the processing from step 10305c to step 10305h has been carried out for all the subpoint strings determined in step 10303, the processing of step 10305 (step 103) ends and the three-dimensional shape of the object has been obtained.
When the processing of step 103 has been carried out, each subpoint T_j (j = 1, 2, ..., M) on a given subpoint string holds colouring information K_j and an existence probability β_j in the texture array, as shown for example in Figure 74. That is, in the three-dimensional shape of the object obtained by the three-dimensional image display method of the present invention, the object surface does not exist at a single subpoint on each subpoint string as in existing methods, but exists at each of the subpoints with its existence probability.
In the image generating method of embodiment 4-1, the image of the subject seen from the observer is generated based on the three-dimensional shape of the subject obtained in step 103. Embodiment 4-1 describes the method of generating the two-dimensional images to be shown on each picture display surface of a display with a plurality of picture display surfaces, such as the DFD. In this case, after the processing of step 103 has finished, the colouring information and existence probabilities of the subpoints are converted into the colouring information and luminance distribution coefficients of the points on the two-dimensional image generation surfaces, as shown in Figure 66 (step 104).
In step 104, when the two-dimensional images to be shown on each picture display surface are generated, first, the observer's viewpoint, a plurality of two-dimensional image generation surfaces, and the three-dimensional shape of the object obtained in step 103 are set in the virtual three-dimensional space. The two-dimensional image generation surfaces LD_n (n = 1, 2, ..., N) are set, as shown for example in Figure 75, so as to be seen overlapping in the depth direction from the observer's viewpoint P, and the distances ld_n from the observer's viewpoint P to the two-dimensional image generation surfaces LD_n are set to the distances set in step 102. If the number and spacing of the projecting planes L_j expressing the three-dimensional shape of the object coincide with the number and spacing of the two-dimensional image generation surfaces LD_n, the three-dimensional shape of the object is set, as shown for example in Figure 75, so that the projecting planes L_j coincide with the two-dimensional image generation surfaces LD_n.

If the two-dimensional image generation surfaces LD_n are taken to be the surfaces on which the images shown on the picture display surfaces of a luminance-modulation-type DFD are generated, then for each point (display point) A_n on the two-dimensional image generation surfaces LD_n seen overlapping from the observer's viewpoint P, colouring information KD_n and a luminance distribution coefficient γ_n must be determined. Here, as shown in Figure 75, if the projecting planes L_j expressing the three-dimensional shape of the object coincide with the two-dimensional image generation surfaces LD_n, the colouring information KD_n of each display point A_n is taken to be the colouring information K_j of the subpoint T_j on the projecting plane L_j that coincides with the two-dimensional image generation surface LD_n on which the display point A_n lies, and the luminance distribution coefficient γ_n of each display point A_n is assigned the existence probability β_j of that subpoint T_j. After the colouring information KD and luminance distribution coefficient γ have been determined in this way for each display point A on the two-dimensional image generation surfaces LD, the images generated on the two-dimensional image generation surfaces LD_n are output and shown on the actual picture display surfaces of the DFD (step 105).
However, the number and spacing of the projection planes L_j expressing the three-dimensional shape of the object need not agree with the number and spacing of the image generation planes LD_n. The two-dimensional image generation method for the case where they do not agree is therefore described next.
In that case, if the distance from the frontmost projection plane to the innermost projection plane, seen from the observer's viewpoint P, is roughly equal to the distance from the frontmost image generation plane to the innermost image generation plane, the projection planes L_j expressing the three-dimensional shape of the object are placed so that, as in Figure 76, the projection plane seen as innermost from the viewpoint P coincides with the innermost image generation plane LD_1. The colour information KD and luminance distribution coefficients γ of the display points A on the innermost plane LD_1 are then the colour information K and existence probabilities β of the projection points T on the innermost projection plane L_1.
The colour information KD and luminance distribution coefficient γ of each display point A on an image generation plane LD that does not coincide with any projection plane are determined by the following method.
For the display points A on the image generation planes LD with no coinciding projection plane, the colour information K and existence probability β of each projection point T on a projection plane L are assigned to the display point A on whichever image generation plane LD is nearest to that projection plane, among the planes seen as overlapping with A from the observer's viewpoint P. The colour information KD of the display point A is then taken as the average of the colour information K of the assigned projection points, or as the colour information K of the projection point T on the projection plane L nearest the plane LD containing A. The luminance distribution coefficient γ is taken as the sum of the existence probabilities β of the assigned projection points. Writing {L_j | j ∈ Γ_n} for the set of projection planes whose nearest image generation plane is LD_n, the luminance distribution rate γ_n of a display point A_n on LD_n is given from the existence probabilities β_j of the projection points T_j by the following formula 97.
[formula 97]
\gamma_n = \sum_{j \in \Gamma_n} \beta_j
Consider, for example, the case where the projection planes L_j and the image generation planes LD_n stand in the positional relation shown in Figure 77(a). Suppose the colour information K_j and existence probabilities β_j of the projection points T_j (j = 1, 2, 3, 4, 5), each seen from the observer's viewpoint P as overlapping the display points A_1, A_2, are assigned to the display point on the image generation plane nearest each projection plane. Then the colour information and existence probabilities of T_1, T_2, T_3 are assigned to the display point A_1. The colour information KD_1 of A_1 may, for example, be the average of the colour information K_1, K_2, K_3 of T_1, T_2, T_3, or the colour information K_2 of the projection point T_2 nearest to A_1. The luminance distribution coefficient γ_1 of A_1 is determined, using formula 97 above, as the sum of the existence probabilities β_1, β_2, β_3 of T_1, T_2, T_3.
Similarly, the colour information and existence probabilities of the projection points T_4, T_5 are assigned to the image generation plane LD_2: the colour information KD_2 of the display point A_2 is the average of the colour information K_4, K_5 of T_4, T_5, or the colour information K_5 of T_5, and the luminance distribution coefficient γ_2 is, by formula 97 above, the sum of the existence probabilities β_4, β_5.
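A minimal sketch of this nearest-plane assignment for one projection-point series might look as follows; the array layout and variable names are assumptions, and only the assignment rule (colour averaged, γ summed per formula 97) follows the text.

```python
import numpy as np

def collapse_to_nearest_plane(K, beta, l_proj, l_disp):
    """K      : (M, 3) colours K_j of the projection points on one ray
    beta   : (M,)   existence probabilities beta_j
    l_proj : (M,)   depths of the projection planes L_j
    l_disp : (N,)   depths of the image generation planes LD_n"""
    N = len(l_disp)
    KD = np.zeros((N, 3))
    gamma = np.zeros(N)
    # each projection point contributes to its nearest image generation plane
    nearest = np.argmin(np.abs(l_proj[:, None] - l_disp[None, :]), axis=1)
    for n in range(N):
        members = nearest == n
        if members.any():
            gamma[n] = beta[members].sum()   # formula 97: sum of beta_j
            KD[n] = K[members].mean(axis=0)  # or colour of the closest T_j
    return KD, gamma
```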
When the spacing of the image generation planes LD_n differs from the spacing of the projection planes L_j, the colour information and existence probability of a projection point on a projection plane L_j lying between two consecutive image generation planes LD_n, LD_{n+1} may instead be distributed according to the ratio of the distances from L_j to each of the planes LD_n, LD_{n+1}. Writing {L_j | j ∈ Γ_n} for the set of projection planes between LD_n and LD_{n+1}, the luminance distribution rate γ_n of a display point A_n on LD_n is given from the existence probabilities β_j by the following formula 98.
[formula 98]
\gamma_n = \sum_{j \in \Gamma_n} w_{j,n} \beta_j
In formula 98, w_{j,n} is a coefficient expressing the degree of contribution of the projection plane L_j to the image generation plane LD_n.
Consider, for example, the case shown in Figure 77(b), where projection planes L_1, L_2 are set between two image generation planes LD_1, LD_2. If the distances from L_1 to the planes LD_1, LD_2 are B_1 and B_2 respectively, the contributions w_{1,1}, w_{1,2} of the projection plane L_1 to the planes LD_1, LD_2 are given, for example, by the following formula 99.
[formula 99]
w_{1,1} = \frac{B_2}{B_1 + B_2}, \quad w_{1,2} = \frac{B_1}{B_1 + B_2}
Similarly, if the distances from the projection plane L_2 to the image generation planes LD_1, LD_2 are B_3 and B_4 respectively, the contributions w_{2,1}, w_{2,2} of L_2 to the planes LD_1, LD_2 are given by the following formula 100.
[formula 100]
w_{2,1} = \frac{B_4}{B_3 + B_4}, \quad w_{2,2} = \frac{B_3}{B_3 + B_4}
As a result, the luminance distribution rate γ_1 of the display point A_1 on the image generation plane LD_1 and the luminance distribution rate γ_2 of the display point A_2 on the plane LD_2 are as shown in the following formula 101.
[formula 101]
\gamma_1 = w_{1,1}\beta_1 + w_{2,1}\beta_2, \quad \gamma_2 = w_{1,2}\beta_1 + w_{2,2}\beta_2
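The following sketch implements the distance-ratio distribution of formulas 98 to 101 for one series; it assumes the image-generation-plane depths are sorted in increasing order and that every projection plane lies between two such planes (boundary handling is simplified).

```python
import numpy as np

def distribute_by_distance_ratio(beta, l_proj, l_disp):
    """beta   : (M,) existence probabilities beta_j on one ray
    l_proj : (M,) depths of the projection planes L_j
    l_disp : (N,) sorted depths of the image generation planes LD_n"""
    N = len(l_disp)
    gamma = np.zeros(N)
    for b_j, depth in zip(beta, l_proj):
        n = int(np.clip(np.searchsorted(l_disp, depth) - 1, 0, N - 2))
        B_front = depth - l_disp[n]        # distance to LD_n
        B_back = l_disp[n + 1] - depth     # distance to LD_(n+1)
        total = B_front + B_back
        gamma[n] += b_j * B_back / total   # w_{j,n}: the nearer plane gets more
        gamma[n + 1] += b_j * B_front / total
    return gamma
```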
Thus, when the three-dimensional shape of the object is obtained as a shape that assigns to each projection point T_j on a projection-point series a probability (existence probability) β_j that the surface exists there, derived from the correlation degrees Q_j, and the luminance distribution coefficients of the display points A on the image generation planes LD are assigned according to these existence probabilities β_j, then on a series where no projection point has a correlation degree Q_j with a distinctive value, so that the reliability of the distance estimate for the object surface is low, the object surface is rendered faintly across several projection planes. Consequently, when the two-dimensional images generated on the image generation planes are shown on the actual display planes to present a three-dimensional image of the object, rays whose distance estimates are unreliable have their existence probability β spread over several projection points on the series, and the object surface appears blurred there. The noise on the three-dimensional image shown on the DFD therefore becomes inconspicuous, and an image that looks more natural to the observer can be displayed.
As described above, with the image generation method of embodiment 4-1, a three-dimensional image that looks natural to the observer can be displayed even without obtaining an accurate three-dimensional shape of the displayed object.
Moreover, when obtaining the existence probabilities β_j, a probability density distribution of the object surface along the projection-point series can be assumed, and the probabilities obtained by statistically processing the evaluation criterion values v_j computed from the correlation degrees Q_j. This reduces the loss of reliability in the existence probabilities β_j caused by noise on the acquired images.
The image generation method of embodiment 4-1 has been described for the case where colour images, in which each point (pixel) is expressed by three-primary colour information (R, G, B), are acquired and used to form the three-dimensional shape of the object. The method is not limited to colour images, however: black-and-white images, in which each point (pixel) is expressed by luminance (Y) and colour difference (U, V), may also be acquired, and the three-dimensional shape of the object obtained from them. When the acquired images are black-and-white, the luminance information (Y) is used in place of the colour information, the three-dimensional shape is obtained by the procedure described in this embodiment, and the two-dimensional images are then generated.
(embodiment 4-2)
Figures 78 to 81 are schematic diagrams for explaining the image generation method of embodiment 4-2. Figure 78 shows the relation between projection points and corresponding points, Figure 79 is a flowchart showing an example of the steps for determining the colour information and existence probability of a projection point, and Figures 80 and 81 illustrate the method of obtaining the existence probability.
The basic flow of the image generation method of embodiment 4-2 is the same as that of embodiment 4-1: when generating the plural two-dimensional images shown on the DFD, the processing from step 101 to step 105 shown in Figure 66 is carried out. The difference from embodiment 4-1 is that in step 101, instead of images from different viewpoints, a plurality of images with different focus distances are acquired, and in step 103 these differently focused images are used to obtain the three-dimensional shape of the object.
When the image generation method of embodiment 4-2 is used to generate the two-dimensional images for displaying, for example, a three-dimensional image of an object on the DFD, a plurality of images taken from one viewpoint while changing the focus distance are first acquired. These images may be taken, for example, with a polarization-type binary optical system and a varifocal lens. As in embodiment 4-1, the acquired images may be colour images or black-and-white images. Then, after the observer's viewpoint has been set as described in embodiment 4-1 (step 102), the processing of step 103, which obtains the three-dimensional shape of the object, is carried out.
In the processing of step 103, the projection planes L_j (j = 1, 2, ..., M) and the reference viewpoint R are first set as described in embodiment 4-1 (steps 10301, 10302). Next, the projection-point series and corresponding points are set, and the arrays (areas) for storing the information on the projection planes are secured (steps 10303, 10304).
When a three-dimensional image of the object is displayed from differently focused images, as in the three-dimensional image display method of embodiment 4-2, the projection planes L_j set in step 10301 are placed, as shown for example in Figure 78, at distances from the camera viewpoint C that agree with the focus distances f_i (i = 1, 2, ..., N) of the images taken by the camera. In step 10303, the corresponding point G_i for a projection point T_j is taken as the point on the image Img_i that overlaps T_j when T_j is viewed from the camera viewpoint C. The method of setting the projection-point series, and of establishing the correspondence between the coordinates of the projection points T_j and the digital image coordinates of the corresponding points G_i, is the same as described in embodiment 4-1, so the detailed explanation is omitted.
The processing of step 10304, which secures the area for storing the projection plane information, is also the same as described in embodiment 4-1, so its detailed explanation is omitted.
Next, the acquired images are used to determine the colour information and existence probability information of each projection point T_j (step 10305). In the three-dimensional image display method of embodiment 4-2 as well, step 10305 is a double loop, as shown for example in Figure 79, that repeats for every projection-point series the processing that determines the colour information and existence probability of each projection point T_j on the series. First the projection-point series is initialized (step 10305a). Then the projection point T_j on the series is initialized, for example to j = 1 (step 10305b).
Next, the colour information of the projection point T_j is determined (step 10305c). In step 10305c, for example, the average of the colour information of the corresponding points G_i set in step 10303 is taken as the colour information K_j of T_j.
Then the focus degree Q_j of the projection point T_j is obtained from the degree of focus of the point on the object shown at each corresponding point G_i of T_j (step 10305j). The focus degree is determined from the sharpness or blur of a point on the image, or of a small region of the image. As mentioned above, there are various methods of computing the focus degree, based on the Depth from Focus or Depth from Defocus theories. Here, the focus degree Q_j is obtained, for example, by comparing the magnitude of the local spatial frequency at each corresponding point G_i.
The Depth from Focus and Depth from Defocus theories are methods that measure the surface shape of the object by analysing a plurality of images with different focus distances. For instance, the object surface can be estimated to lie at the distance corresponding to the focus distance of the image, among those taken while changing the focus, whose local spatial frequency is highest. The focus degree Q_j of the projection point T_j is therefore computed, for example, with the local spatial-frequency evaluation function given by the following formula 102.
[formula 102]
Q = \frac{1}{D} \sum_{x=x_i}^{x_f} \sum_{y=y_i}^{y_f} \left\{ \sum_{p=-L_c}^{L_c} \sum_{q=-L_r}^{L_r} \left| f(x, y) - f(x+p, y+q) \right| \right\}
Here, f is the grey level of a pixel, D is a normalization constant equal to the number of pixels evaluated, and (−L_c, −L_r)–(L_c, L_r) and (x_i, y_i)–(x_f, y_f) are the small regions used for the dispersion evaluation and the smoothing, respectively.
Formula 102 is only one example of a method for obtaining the focus degree Q_j; the focus degree may also be obtained with formulas other than formula 102.
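As one concrete reading of formula 102, the following sketch evaluates the focus degree over a small window of a greyscale image. The window bounds and the (L_c, L_r) neighbourhood sizes are parameters, and the caller is assumed to keep the window at least (L_c, L_r) pixels away from the image border.

```python
import numpy as np

def focus_degree(img, xi, yi, xf, yf, Lc=1, Lr=1):
    """img : 2-D greyscale array, f(x, y) read as img[x, y]
    Returns Q of formula 102; D is the number of evaluated pixels."""
    total = 0.0
    count = 0
    for x in range(xi, xf + 1):
        for y in range(yi, yf + 1):
            for p in range(-Lc, Lc + 1):
                for q in range(-Lr, Lr + 1):
                    total += abs(float(img[x, y]) - float(img[x + p, y + q]))
            count += 1
    return total / count   # larger Q = higher local spatial frequency
```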
After the focus degree Q_j has been obtained in step 10305j, the projection point T_j is updated, and it is checked whether the processing of steps 10305c and 10305j has been carried out for all projection points on the series being processed (step 10305e). If any projection point has not yet undergone the processing of steps 10305c and 10305j, the procedure returns to step 10305c to obtain its colour information K_j and focus degree Q_j.
Once the colour information and focus degree Q_j have been obtained for all projection points on the series being processed, each projection point T_j on the series has been given colour information K_j and a focus degree Q_j, as shown in Figure 80. The focus degree Q_j of each projection point T_j plays the role of the correlation degree used in embodiment 4-1 when determining the existence probability β. Depending on the shape or surface pattern (texture) of the object, the shooting conditions, and so on, comparing the focus degrees Q_j of the projection points on a series sometimes reveals no projection point whose focus degree takes a distinctive value. In such cases, even if the object surface is estimated to lie at some particular projection point, the reliability of that estimate is low and the estimate may be wrong; and when it is wrong, the effect appears as large noise on the generated image.
In the three-dimensional image display method of the present invention, therefore, the probability (existence probability) β_j that the object surface exists at each projection point T_j on the series is determined next, as shown in Figure 81. Here, as described in embodiment 4-1, to prevent the loss of reliability caused by noise on the acquired images, the existence probabilities β_j are determined after statistical processing of the evaluation criterion values v_j (step 10305f). The evaluation criterion values v_j computed in step 10305f must satisfy formulas 87 and 88 above, so in embodiment 4-2 the evaluation criterion value v_j of a projection point T_j is determined, for example, with the following formula 103.
[formula 103]
v_j = \frac{Q_j}{\sum_{j'=1}^{M} Q_{j'}}
Basically, the evaluation criterion values v_j need only satisfy the conditions of formulas 87 and 88 above. They may therefore also be determined with formulas other than formula 103.
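A direct implementation of formula 103 is a one-line normalisation; the small epsilon guarding against an all-zero series is an assumption added for robustness, not part of the text.

```python
import numpy as np

def focus_to_eval(Q, eps=1e-12):
    """Normalise the focus degrees Q_j on one series into evaluation
    criterion values v_j with 0 <= v_j <= 1 and sum v_j = 1."""
    Q = np.asarray(Q, dtype=float)
    return Q / (Q.sum() + eps)
```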
After the evaluation criterion values v_j have been computed in step 10305f, parameter fitting is carried out by the procedure described above, and the existence probability β_j of each projection point T_j is determined as shown in Figure 81 (steps 10305g, 10305h).
After the existence probabilities β_j have been determined in step 10305h, the colour information K_j and existence probability β_j of each projection point T_j are stored in the area secured in step 10304.
Once the colour information K_j and existence probabilities β_j of the projection points T_j have been stored, the projection-point series is updated, and it is checked whether the processing from step 10305c to step 10305h has been carried out for all projection-point series determined in step 10303 (step 10305i). If any series has not yet undergone the processing from step 10305c to step 10305h, the procedure returns to step 10305b and that processing is repeated.
When the processing from step 10305c to step 10305h has been carried out for all projection-point series determined in step 10303, the processing of step 10305 finishes and the three-dimensional shape of the object has been obtained. Then, once the three-dimensional shape of the object has been obtained by the processing of step 103, the colour information and luminance distribution coefficients γ of the display points A on the image generation planes LD are determined from the obtained shape by the same procedure as in embodiment 4-1, the two-dimensional images shown on the plural overlapping display planes of a DFD are generated (step 104), and when the generated images are shown on the actual display planes (step 105), the three-dimensional image of the object is displayed.
As in the three-dimensional image display method of embodiment 4-1, if no projection point T_j on a series of the obtained three-dimensional shape has a focus degree Q_j with a distinctive value, so that the reliability of the distance estimate for the object surface is low, the object surface is rendered faintly across several projection planes on that series. If the luminance distribution coefficients γ of the points on the image generation planes LD are determined according to the existence probabilities β of the projection points T_j, then when the generated two-dimensional images are shown on the actual display planes to present a three-dimensional image of the object, rays with unreliable distance estimates have their existence probability β spread over several projection points on the series, and the object surface appears blurred there. The noise on the three-dimensional image shown on the DFD therefore becomes inconspicuous, and an image that looks more natural to the observer can be displayed.
As described above, with the three-dimensional image display method of embodiment 4-2, as with embodiment 4-1, a natural-looking three-dimensional shape can be displayed even without determining an accurate three-dimensional shape of the object.
In the image generation method of embodiment 4-2 as well, the acquired images may be either colour or black-and-white; for black-and-white images, the luminance information (Y) is used in place of the colour information, and the processing described in this embodiment is carried out.
(embodiment 4-3)
Figures 82 to 84 are schematic diagrams for explaining the arbitrary-viewpoint image generation method of embodiment 4-3 of the present invention. Figure 82 is a flowchart showing an example of the overall processing procedure, Figure 83 illustrates the principle of rendering, and Figures 84(a) and 84(b) are flowcharts showing an example of the procedure for converting existence probabilities into transparencies.
Embodiments 4-1 and 4-2 gave examples in which the three-dimensional shape of the subject obtained in step 103 was used to generate the two-dimensional images shown on the display planes of a device with plural image display planes, such as the DFD. The three-dimensional shape model of the subject is not limited to this use, however, and may also be used to generate a two-dimensional image of the subject seen from an arbitrary viewpoint. The difference from embodiments 4-1 and 4-2 is that, as shown in Figure 82, step 103 is followed by a rendering step 106 that converts the three-dimensional shape of the subject into the two-dimensional image seen from the observer's viewpoint. The processing from step 101 to step 103 that obtains the three-dimensional shape of the subject is the same as described in embodiments 4-1 and 4-2, so its detailed explanation is omitted.
In the arbitrary-viewpoint image generation method of embodiment 4-3, the rendering step 106 determines the colour information of each point (pixel) on the arbitrary-viewpoint image to be displayed by mixing, as shown for example in Figure 83, the colour information K_j of the projection points T_j (j = 1, 2, ..., M) that are seen as overlapping the point A on the image from the observer's viewpoint P. The colour-information mixing processing using the transparencies α_j is as described for embodiment 2-2 in the second embodiment.
In the arbitrary-viewpoint image generation method of embodiment 4-3, after the step 10305h that determines the existence probabilities β_j, a process that converts the existence probabilities β_j into transparencies α_j is carried out, as shown for example in Figure 84(a) (step 107).
In the process of converting the existence probabilities β_j into transparencies α_j, as shown for example in Figure 84(b), the projection point T_j is first initialized to j = M (step 107a). Then the transparency α_M of the projection point T_M is set to α_M = β_M (step 107b).
Next, the value of the variable j is updated to j = j − 1 (step 107c), and it is determined whether α_{j+1} equals 1 (step 107d). If the transparency α_{j+1} ≠ 1, the transparency α_j is obtained, for example, from the following formula 104 (step 107e).
[formula 104]
\alpha_j = \frac{\beta_j}{\prod_{m=j+1}^{M} (1 - \alpha_m)}
If the transparency α_{j+1} is 1, then, for example, α_j = 1 is set (step 107f). Formula 104 is only one way of obtaining the transparency α_j in step 107e; other formulas may also be used. Moreover, although the detailed explanation is omitted, α_j may in fact be set to an arbitrary value in step 107f, and so may take values other than 1.
Next, it is determined whether the processing from step 107d to step 107f has proceeded to the variable j = 1 (step 107g). If the processing is not yet finished, the procedure returns to step 107c and the processing is repeated.
When the processing from step 107d to step 107f has proceeded to the variable j = 1, the process of converting into transparencies α_j the existence probabilities β_j of the projection points T_j seen as overlapping the point A on the image plane from the observer's viewpoint P is finished. Then, in the rendering step 106, the mixing processing of formulas 62 and 63 above is used to obtain the colour information D_M of the point A on the arbitrary-viewpoint image. If this processing is carried out for every point (pixel) on the arbitrary-viewpoint image, the arbitrary-viewpoint image from the observer's viewpoint P is obtained.
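A sketch of steps 107a to 107g for one series follows. It works through the indices from j = M downward, keeping a running product of (1 − α_m) so that formula 104 needs no repeated multiplication; the exact-zero test standing in for the α_{j+1} = 1 branch is an implementation choice made here.

```python
import numpy as np

def probability_to_alpha(beta):
    """beta : (M,) existence probabilities beta_1 .. beta_M;
    the recursion starts at j = M per step 107b.
    Returns the transparencies alpha_j of formula 104."""
    M = len(beta)
    alpha = np.zeros(M)
    alpha[M - 1] = beta[M - 1]            # step 107b: alpha_M = beta_M
    remaining = 1.0 - alpha[M - 1]        # product of (1 - alpha_m), m > j
    for j in range(M - 2, -1, -1):        # steps 107c-107g
        if remaining == 0.0:              # some later alpha reached 1
            alpha[j] = 1.0                # step 107f (value is in fact free)
        else:
            alpha[j] = beta[j] / remaining   # formula 104
        remaining *= 1.0 - alpha[j]
    return alpha
```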
In the image generation method of embodiment 4-3 as well, the acquired images may be colour or black-and-white; for black-and-white images, the luminance information (Y) is used in place of the colour information, the three-dimensional shape of the object is obtained by the procedure described in embodiment 4-1, and the virtual viewpoint image is then generated by the procedure of this embodiment.
(embodiment 4-4)
Figures 85 to 89 are schematic diagrams showing the outline configuration of an image generation device according to embodiment 4-4 of the present invention. Figures 85 and 86 are block diagrams of the device's structure, and Figures 87 to 89 show configuration examples of image display systems using the image generation device.
In Figures 85 and 86, 2 is the three-dimensional image generation device, 201 is the subject image acquisition unit, 202 is the observer viewpoint setting unit, 203 is the projection plane etc. setting unit, 204 is the texture array securing unit, 205 is the colour information / existence probability determination unit, 206 is the projection plane information–display plane information conversion unit, 207 is the image output unit, 208 is the rendering unit, 3 is the image display unit, 4 is the subject image photographing unit, and 5 is the viewpoint information input unit.
The image generation device 2 of embodiment 4-4 obtains the three-dimensional shape of an object by the procedures described in embodiments 4-1 and 4-2, and generates either the two-dimensional images shown on the display planes of an image display unit 3 that, like the DFD, has a plurality of overlapping display planes, or an image of the object seen from an arbitrary viewpoint. When the device generates the images shown on the DFD, it comprises, as shown for example in Figure 85: a subject image acquisition unit 201 that acquires a plurality of subject images taken under different shooting conditions; an observer viewpoint setting unit 202 that sets the viewpoint of the observer who views the generated image; a projection plane etc. setting unit 203 that sets the projection planes, projection points, and corresponding points used to determine the existence probabilities; a texture array securing unit 204 that secures the texture arrays storing the colour information and existence probabilities of the points (projection points) on the projection planes; a colour information / existence probability determination unit 205 that determines the colour information of the projection points and the probability (existence probability) that the surface exists at each projection point; a projection plane information–display plane information conversion unit 206 that converts the colour information and existence probabilities of the projection points into the colour information and existence probabilities of the points on the two-dimensional images shown on the display planes; and an image output unit 207. The images output from the image output unit 207 are shown on an image display unit 3 having a plurality of overlapping display planes, such as the DFD.
When the device generates an image of the object seen from an arbitrary viewpoint, as described in embodiment 4-3, a rendering unit 208 takes the place of the projection plane information–display plane information conversion unit 206, as shown in Figure 86. Although not illustrated, a configuration having both the conversion unit 206 and the rendering unit 208, and generating an image according to an instruction the observer gives through some unit, is also possible.
The subject image acquisition unit 201 acquires images of the subject (object) taken by the subject image photographing unit 4. The photographing unit 4 may, for example, be a photographing unit with cameras placed at a plurality of viewpoints, or a photographing unit that can take images with different focus positions from a single viewpoint. When images with different focus positions are taken from one viewpoint, the photographing unit 4 can use, for example, a polarization-type binary optical system (see, e.g., reference 12) or a varifocal lens (see, e.g., reference 13). It is also possible to take the images by rapidly switching among a plurality of lenses with different focus positions.
The observer viewpoint setting unit 202 sets, for example, the distance from the observer's viewpoint to the display planes of the image display unit 3, using information the observer enters with a viewpoint information input unit 5 such as a mouse or keyboard. The input unit 5 may also be a unit that detects the observer's posture or line of sight and enters the information corresponding to that posture or line of sight.
The projection plane etc. setting unit 203 sets, for example, mutually parallel projection planes L_j, the projection-point series, and the corresponding points, as described in embodiments 4-1 and 4-2.
The texture array securing unit 204 secures, in memory provided in the device, the areas that store the colour information K_j and existence probabilities β_j of the projection points T_j on the projection planes, as described in embodiments 4-1 and 4-2.
The colour information / existence probability determination unit 205 determines the colour information from the corresponding points G on the images associated with each projection point T_j, and determines the probability β_j that the surface exists at T_j, as described in embodiments 4-1 and 4-2.
The projection plane information–display plane information conversion unit 206 converts the colour information and existence probabilities on the projection planes into the colour information and luminance distribution rates of the points (display points) on the two-dimensional images shown on the display planes of the image display unit, as described in embodiment 4-1. When the rendering unit 208 is provided in place of the conversion unit 206, it determines the colour information of each point on the generated image on the basis of the relations of formula 59, or formulas 62 and 63, above, as described in embodiment 4-3.
The image generation device 2 of embodiment 4-4 generates the images shown on the DFD, for example, by the procedures described in embodiments 4-1 and 4-2. That is, the three-dimensional image generation device 2 need not carry out the conventional processing that obtains an accurate three-dimensional shape of the object. Therefore, even a device without high processing power can generate the images shown on the DFD quickly and easily.
The image generation device 2 of embodiment 4-4 can also be realized by a computer and a program executed on that computer. In that case, it suffices to execute on the computer a program describing instructions corresponding to the processing procedure described in embodiment 4-1 or embodiment 4-2. The program may be recorded on a magnetic, electric, or optical recording medium and provided that way, or provided over a network such as the Internet.
An image display system using the image generation device 2 of embodiment 4-4 may take, for example, the configuration shown in Figure 87. The subject image photographing unit 4 may be placed near the space where the observer User views the image display unit (DFD) 3, or at a geographically remote location. When it is placed at a geographically remote location, the photographed images can be transmitted to the three-dimensional image generation device 2 over a network such as the Internet.
The image display system using the image generation device 2 of embodiment 4-4 can be applied not only to the case shown in Figure 87, where an observer User views some subject Obj, but also to bidirectional communication systems such as videophones and video conferences. In that case, as shown for example in Figure 88, three-dimensional image generation devices 2A, 2B, image display units (DFD) 3A, 3B, subject image photographing units 4A, 4B, and reference viewpoint setting units 5A, 5B are placed in the respective spaces where the observers UserA and UserB are located. If, for example, the image generation devices 2A, 2B placed in the observers' spaces are connected by a network 6 such as the Internet, observer UserA can view on the image display unit 3A the three-dimensional image of observer UserB generated from the images taken by the photographing unit 4B. Similarly, observer UserB can view on the image display unit 3B the three-dimensional image of observer UserA generated from the images taken by the photographing unit 4A.
When applied to such a bidirectional communication system, the image generation devices 2A, 2B need not both have the configuration shown in Figure 88; either of them may be a general communication terminal lacking the structural units shown in Figure 86. The structural units shown in Figure 86 may also be distributed between the image generation devices 2A and 2B.
Furthermore, as shown in Figure 88, if another image generation device 2C is provided on the network 6, then even when the image generation devices 2A, 2B are not placed in the spaces where the observers UserA, UserB are located, the device 2C on the network 6 can be used to obtain the three-dimensional images of the objects shown on the image display units (DFD) 3A, 3B.
Figures 87 and 88 show systems in which the subject image photographing unit 4 has a plurality of cameras, but when the three-dimensional shape of the object is obtained from differently focused images as described in embodiment 4-2, a single camera may be used to generate the display images, as shown for example in Figure 89.
The present invention has been described concretely above on the basis of the described embodiments, but the invention is not limited to those embodiments, and various changes can of course be made within a scope that does not depart from its gist.
For example, embodiment 4-1 described a method of displaying a three-dimensional image of an object from images with different viewpoints, and embodiment 4-2 described a method of displaying it from images with different focus distances; these methods may also be combined. In that case, for a given projection point T_j, a correlation degree is obtained from the corresponding points of the images with different viewpoints, a local spatial frequency is obtained from the corresponding points of the images whose focus position is changed at a given viewpoint, and these are combined to obtain the existence probability β_j. The reliability of the existence probability β_j then improves, and an image that looks natural to the observer can be displayed.
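The text leaves the exact combination rule open. One hypothetical sketch multiplies the two normalised cues (inverse correlation degree, where small is good, and focus degree, where large is good) and renormalises, purely as an illustration of how the two estimates could be merged.

```python
import numpy as np

def combined_probability(Q_corr, Q_focus, eps=1e-12):
    """Q_corr  : (M,) multi-view correlation degrees (small = likely surface)
    Q_focus : (M,) focus degrees (large = likely surface)"""
    p_corr = 1.0 / (np.asarray(Q_corr, dtype=float) + eps)
    p_corr /= p_corr.sum()
    p_focus = np.asarray(Q_focus, dtype=float)
    p_focus = p_focus / (p_focus.sum() + eps)
    combined = p_corr * p_focus    # a product rule; not fixed by the text
    return combined / combined.sum()
```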
(effect of the 4th embodiment)
In the image generation method of the fourth embodiment, when the three-dimensional shape of the subject is obtained, a plurality of projection planes are set as described above, and each point (projection point) on the projection planes that is seen as overlapping from the reference viewpoint is given a probability (existence probability) that the surface of the subject exists there. That is, unlike existing methods of obtaining a three-dimensional shape, which assume the object surface lies at one of the projection points seen as overlapping from the reference viewpoint, the surface of the subject is considered to exist at each projection point with a certain probability. When the existence probabilities are determined, statistical processing is applied to the evaluation criterion values computed from the correlation degrees or focus degrees of the projection points, and the existence probability of each projection point is generated from the processed values. In that statistical processing, a probability distribution model of the subject is assumed and a fitting function of the evaluation criterion values of the projection points is obtained, from which the existence probability of each projection point is determined.
Thus, even when the reliability of estimating at which distance (projection point) the surface of the subject lies in some direction from the reference viewpoint is low, the surface of the subject is still given a certain probability at the projection points corresponding to the distance where the surface actually exists. Therefore, when an image of the subject seen from the observer's viewpoint is generated from the three-dimensional shape of the subject obtained by the above procedure, the discontinuous noise produced by distance estimation errors in conventional methods becomes inconspicuous. Moreover, determining the existence probabilities by statistical processing of the evaluation criterion values reduces the loss of reliability of the existence probabilities caused by noise on the acquired images.
Furthermore, according to this embodiment, even a device with low processing power, such as an ordinary, widely available personal computer, can generate the two-dimensional images quickly.
[the 5th embodiment]
The fifth embodiment of the present invention is described next. It corresponds mainly to claims 44 to 53. In the fifth embodiment, the three-dimensional shape of a subject is obtained from a plurality of images taken of the subject under mutually different shooting conditions, and a three-dimensional image of the subject is shown, based on the obtained shape, on a display such as the DFD that has a plurality of display planes. Unlike the fourth embodiment, the fifth embodiment does not carry out the parameter fitting processing described there. In the figures used to explain the fifth embodiment, parts with the same function carry the same reference numerals.
When the three-dimensional shape of the subject is obtained, projection planes with a multilayer structure are set in a virtual three-dimensional space, and for the points (projection points) on the plural projection planes that are seen as overlapping from the observer's viewpoint, the colour or luminance information of each projection point and the probability (existence probability) that the surface of the subject exists at that projection point are determined. Then, when the two-dimensional images shown on the plural display planes are generated from the obtained three-dimensional shape of the subject, the colour or luminance information and the existence probability of each projection point are assigned to points on the two-dimensional images, and when the images are shown on the display planes, each point on the two-dimensional images is displayed with a brightness corresponding to the magnitude of its existence probability. In this way, the parts where the reliability of the estimate of the subject's surface distance is low are displayed faintly, presenting a three-dimensional image that looks more natural to the observer.
(embodiment 5-1)
Figures 90 to 100 are schematic diagrams for explaining the three-dimensional image display method of embodiment 5-1 of the present invention. Figure 90 is a flowchart showing an example of the overall processing procedure, Figures 91 and 92 show an example of how the projection planes are set, Figure 93 explains how the projection-point series are set, Figure 94 is a flowchart showing an example of the procedure for determining the colour information and existence probability of a projection point, Figures 95 to 97 explain the method of determining the existence probability, and Figures 98 to 100 explain the method of generating the two-dimensional images shown on the display planes.
The three-dimensional image display method of embodiment 5-1 comprises, as shown for example in Figure 90: step 101, which acquires a plurality of images of the object taken from different viewpoints; step 102, which sets the viewpoint (reference viewpoint) of the observer who views the displayed three-dimensional image; step 103, which obtains the three-dimensional shape of the object from the plural images; step 104, which generates the two-dimensional images shown on the display planes on the basis of the shape obtained in step 103; and step 105, which displays the two-dimensional images generated in step 104 on the display planes and thereby presents the three-dimensional image of the object.
When the three-dimensional image display method of embodiment 5-1 is used to show a three-dimensional image of an object on the DFD, images of the object taken from different viewpoints are first acquired (step 101). The viewpoints of the acquired images may be arranged, for example, in a straight line, in an arc or other arbitrary curve, or two-dimensionally on a plane or curved surface. The acquired images may be colour or black-and-white, but in embodiment 5-1 they are assumed to be colour images in which each point (pixel) is expressed by three-primary colour information (R, G, B).
After the images are acquired in step 101, the viewpoint of the observer who views the object shown on the DFD is set (step 102). For the observer's viewpoint, the relative positional relation to the display planes, such as the distance to the display plane serving as the reference among the plural display planes, and the direction of the line of sight, are set.
After the observer's viewpoint has been set in step 102, the three-dimensional shape of the object shown on the images is obtained from the plural images acquired in step 101 (step 103). In step 103, projection planes L_j (j = 1, 2, ..., M) with a multilayer structure are first set (step 10301), and then the reference viewpoint R used for obtaining the three-dimensional shape of the object is set (step 10302). The projection planes L_j are set, for example as in Figure 91, as a plurality of planes parallel to the XY plane of a virtual three-dimensional space, each placed, as in Figure 92, at a distance l_j from Z = 0 in the negative direction. The reference viewpoint R is the viewpoint for obtaining the three-dimensional shape of the object and may be set at any point in the space. The reference viewpoint R is therefore taken as the observer's viewpoint set in step 102, and, regarding the projection plane L_1 farthest from Z = 0 as the display plane of the DFD seen as innermost by the observer, the distance from the plane L_1 is set, as in Figure 92, to the distance ld from the observer's viewpoint to the innermost display plane of the DFD.
After the projection planes L_j and the reference viewpoint R have been set in steps 10301 and 10302, the projection points on the projection planes and the points (corresponding points) on the acquired images corresponding to each projection point are set (step 10303). The projection points are set, as shown for example in Figure 93, as the intersections of the projection planes L_j with straight lines drawn from the reference viewpoint R in a plurality of directions. Since estimating the distance of the surface of the subject amounts to estimating at which of the projection points T_j on the same straight line the surface exists, the projection points T_j on the same straight line are handled together as a projection-point series S, as in Figure 93.
The corresponding points are, as shown in Figures 91 and 92, the points G_ij on the image plane of each camera that overlap the camera's viewpoint when the viewpoint C_i of each camera is seen from the projection point T_j. As shown in Figure 91, when a two-dimensional coordinate system (xy coordinate system) is set on each image plane, the two-dimensional coordinates (x_ij, y_ij) of the corresponding point G_ij associated with the projection point T_j (X_j, Y_j, Z_j) can be obtained by projecting T_j onto the two-dimensional point on each image plane. This projection can be carried out with the usual 3-row, 4-column projection matrix that projects a point (X, Y, Z) in three-dimensional space onto a point (x, y) on a two-dimensional plane. The relation between the coordinates (x_ij, y_ij) of the corresponding point G_ij in the virtual three-dimensional space and the digital image coordinates (u, v) is the same as described in the other embodiments.
In step 10303, the correspondence between the digital image coordinates (u_ij, v_ij) of the corresponding points G_ij and the three-dimensional space coordinates (X_j, Y_j, Z_j) of the projection points T_j is established. This correspondence may be set as a table covering all (u_ij, v_ij) and (X_j, Y_j, Z_j), or the values of (X_j, Y_j, Z_j) may be set only for representative (u_ij, v_ij), with the remaining points obtained by interpolation processing, for example linear interpolation.
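A sketch of the projection described above follows. The 3×4 matrix P combines the camera's intrinsic and extrinsic parameters; its contents are outside the scope of this passage and are assumed given.

```python
import numpy as np

def project_point(P, X):
    """P : (3, 4) projection matrix of camera i
    X : (3,) coordinates (X_j, Y_j, Z_j) of the projection point T_j
    Returns the corresponding point (x_ij, y_ij) on the image plane."""
    Xh = np.append(np.asarray(X, dtype=float), 1.0)  # homogeneous coordinates
    u = P @ Xh
    return u[:2] / u[2]                              # perspective divide
```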
In the digital image coordinate system, (u, v) takes quantized values, but in the following description they are assumed to take continuous values unless otherwise stated, and appropriate discretization is carried out when the two-dimensional array is accessed.
After the projection-point series and corresponding points have been determined in step 10303, the arrays that store the information of the projection planes L_j, that is, the images texture-mapped onto the planes L_j, are secured (step 10304). The secured arrays hold, for each pixel, for example 8-bit colour information (R, G, B) and existence probability information, as the texture information corresponding to the positions of the projection points T_j.
After the arrays storing the projection plane information have been secured in step 10304, the colour information and existence probability of each projection point T_j are determined (step 10305). Step 10305 is a double loop, as shown for example in Figure 94, that repeats for every projection-point series the processing that determines the colour information and existence probability of each projection point T_j on the series. First the projection-point series is initialized (step 10305a). Then the projection point T_j on the series is initialized, for example to j = 1 (step 10305b).
Next, the colour information of the projection point T_j is determined (step 10305c). In step 10305c, for example, the average of the colour information K_i of the corresponding points G_i set in step 10303 is taken as the colour information K_j of T_j.
Then the correlation degree Q_j of the points on the object shown at the corresponding points G_ij (i ∈ I) associated with the projection point T_j is obtained (step 10305d). Writing K_j for the vector representing the colour information of T_j and K_ij for the vector representing the colour information of each corresponding point G_ij, the correlation degree Q_j is obtained, as in the fourth embodiment, by the following formula 105.
[formula 105]
Q_j = \sum_{i \in I} (K_j - K_{ij})^2
When the correlation degree Q_j is obtained with formula 105, it always takes a positive value, and the higher the correlation, the smaller the value.
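For one projection point, formula 105 together with the colour rule of step 10305c reduces to a few lines; the (cameras × 3) array layout is an assumption made for illustration.

```python
import numpy as np

def point_colour_and_correlation(K_corr):
    """K_corr : (num_cameras, 3) colours K_ij of the corresponding points G_ij.
    Returns K_j (their mean, step 10305c) and Q_j (formula 105);
    a smaller Q_j means the cameras agree better at this depth."""
    K_j = K_corr.mean(axis=0)
    Q_j = float(((K_corr - K_j) ** 2).sum())
    return K_j, Q_j
```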
Formula 105 is one example of a method for obtaining the correlation degree Q_j; the correlation degree may also be obtained with formulas other than formula 105. Moreover, in obtaining Q_j, not only the single points T_j and G_ij but also small regions containing several points near T_j and G_ij may be considered.
After the correlation degree Q_j has been obtained in step 10305d, the projection point T_j is updated, and it is checked whether the processing of steps 10305c and 10305d has been carried out for all projection points on the series being processed (step 10305e). If any projection point has not yet undergone the processing of steps 10305c and 10305d, the procedure returns to step 10305c to obtain its colour information K_j and correlation degree Q_j.
Once the colour information and correlation degrees Q_j have been obtained for all projection points on the series being processed, each projection point T_j on the series has been given colour information K_j and a correlation degree Q_j, as shown in Figure 95. Comparing the correlation degrees Q_j of the projection points T_j, generally only one projection point T_m has a distinctively small correlation degree Q_m, as in Figure 96(a). In such a case the object surface can be estimated to lie at the projection point T_m on that series, and the reliability of the estimate is high.
However, as described in the preceding embodiments, depending on the shape or surface pattern (texture) of the object, the shooting conditions, and so on, comparing the correlation degrees Q_j of the projection points on a series sometimes reveals no projection point with a distinctively small value, as in Figure 96(b). In such cases, even if the object surface is estimated to lie at some particular projection point, the reliability is low and the estimate may be wrong; and when the estimate is wrong, the effect appears as large noise on the generated image.
In the three-dimensional image display method of the present invention, therefore, the probability (existence probability) β_j that the surface exists at each projection point T_j on the series is determined next (step 10305f). The existence probabilities β_j must satisfy the following formulas 106 and 107.
[formula 106]
0 \le \beta_j \le 1
[formula 107]
\sum_{j=1}^{M} \beta_j = 1
Furthermore, taking the existence probability β_j to be closer to 1 the higher the probability that the surface exists at the projection point T_j, the existence probabilities β_j (j ∈ J) can be determined from the correlation degrees Q_j obtained for the projection points T_j on the series by carrying out, for example, the conversion processing expressed by the following formulas 108 and 109.
[formula 108]
$$\tilde{\beta}_j = \frac{1}{Q_j}$$
[formula 109]
$$\beta_j = \frac{\tilde{\beta}_j}{\sum_{j=1}^{M} \tilde{\beta}_j}$$
Basically, the existence probability β_j only has to satisfy the conditions of formulas 106 and 107, so the conversion processing may also be determined using formulas other than formulas 108 and 109.
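For reference, a minimal sketch of the conversion of formulas 108 and 109 follows, assuming the correlation degrees of one subpoint string are held in a numpy array; all names are illustrative.

```python
import numpy as np

def existence_probabilities(q):
    """Formulas 108/109: the reciprocal of each correlation degree Q_j is
    large where the correlation is high (Q_j small); normalizing the
    reciprocals makes the values on the string sum to 1, satisfying
    formulas 106 and 107."""
    beta_tilde = 1.0 / np.asarray(q, dtype=float)  # formula 108; assumes Q_j > 0
    return beta_tilde / beta_tilde.sum()           # formula 109
```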
After the existence probability β_j of each subpoint T_j has been determined in step 10305f, the colour information K_j and existence probability β_j of each subpoint T_j are stored in the area secured in step 10304.
After storing the colour information K_j and existence probability β_j of each subpoint T_j, the subpoint string is updated and it is confirmed whether the processing of steps 10305c to 10305f has been carried out for all subpoint strings determined in step 10303 (step 10305g). If a subpoint string remains for which the processing of steps 10305c to 10305f has not been carried out, the procedure returns to step 10305b and the processing from step 10305c to step 10305f is repeated.
In this way, once the processing from step 10305c to step 10305f has been carried out for all subpoint strings determined in step 10303, the processing of step 10305 ends and the three-dimensional shape of the object has been obtained.
When the processing of step 103 has been carried out, each subpoint T_j (j = 1, 2, ..., M) on a given subpoint string holds colour information K_j and an existence probability β_j in the texture array, as shown for example in Figure 97. That is, in the three-dimensional shape of the object obtained by the three-dimensional image display method of the present invention, the object surface does not exist at one single subpoint on the string, as in the existing methods, but exists at each of the subpoints with some probability. In the three-dimensional image display method of the present invention, this three-dimensional shape of the object is used to generate the two-dimensional images displayed on the respective picture display faces (step 104).
In step 104, to generate the two-dimensional images displayed on the picture display faces, first the observer's viewpoint, a plurality of two-dimensional image generation faces, and the three-dimensional shape of the object obtained in step 103 are set in the virtual three-dimensional space. Here, as shown for example in Figure 98, the two-dimensional image generation faces LD_n (n = 1, 2, ..., N) are set so as to overlap in the depth direction when seen from the observer's viewpoint P, and the distances ld_n from the viewpoint P to the generation faces LD_n are set to the distances set in step 102. If the number and spacing of the projecting planes L_j expressing the three-dimensional shape of the object agree with the number and spacing of the generation faces LD_n, the three-dimensional shape of the object is set so that the projecting planes L_j coincide with the generation faces LD_n, as shown in Figure 98. If the generation faces LD_n are taken to be the faces on which the images shown on the picture display faces of a luminance modulation type DFD are generated, then colour information KD_n and an existence probability (luminance distribution coefficient) γ_n must be determined for each point (display point) A_n on the generation faces LD_n seen as overlapping from the observer's viewpoint P. As shown in Figure 98, when the projecting planes L_j coincide with the generation faces LD_n, the colour information KD_n of each display point A_n is the colour information K_j of the subpoint T_j on the projecting plane L_j that coincides with the generation face LD_n on which the display point A_n lies, and the luminance distribution coefficient γ_n of each display point A_n is likewise assigned the existence probability β_j of that subpoint T_j. After the colour information KD and luminance distribution coefficient γ have been determined in this way for every display point A on the generation faces LD, the images generated on the generation faces LD_n are output and displayed on the picture display faces of the actual DFD (step 105).
However, the number and spacing of the projecting planes L_j expressing the three-dimensional shape of the object need not agree with the number and spacing of the generation faces LD_n. A method of generating the two-dimensional images when the number and spacing of the projecting planes L_j and of the generation faces LD_n do not agree is therefore described next.
In that case, if the distance from the frontmost to the rearmost projecting plane, as seen from the observer's viewpoint P, is roughly equal to the distance from the frontmost to the rearmost generation face, the projecting planes L_j expressing the three-dimensional shape of the object are set, as shown for example in Figure 99, so that the rearmost projecting plane L_1 as seen from the viewpoint P coincides with the generation face LD_1. In this way, the colour information KD and luminance distribution coefficients γ of the display points A on the rearmost generation face LD_1, as seen from the viewpoint P, are the colour information K and existence probabilities β of the subpoints T on the rearmost projecting plane L_1.
The colour information KD and luminance distribution coefficients γ of the display points A on the generation faces LD that have no coinciding projecting plane are determined by the following method.
For those generation faces, the colour information K and existence probability β of each subpoint T on each projecting plane L seen from the observer's viewpoint P as overlapping a display point A are distributed to the display point A on the generation face LD nearest to that projecting plane L. Here, the colour information KD of the display point A may be the mean of the colour information K of the distributed subpoints, or the colour information K of the subpoint T on the projecting plane L nearest to the generation face LD on which the display point A lies. The luminance distribution coefficient γ is taken to be the sum of the existence probabilities β of the distributed subpoints T. If the set of projecting planes L_j for which a certain generation face LD_n is the nearest generation face is written {L_j | j ∈ Γ_n}, then the luminance distribution coefficient γ_n of a display point A_n on the generation face LD_n is given by the following formula 110 using the existence probabilities β_j of the subpoints T_j on those projecting planes L_j.
[formula 110]
$$\gamma_n = \sum_{j \in \Gamma_n} \beta_j$$
Consider, for example, the positional relation between the projecting planes L_j and the generation faces LD_n shown in Figure 100(a). If the colour information K_j and existence probabilities β_j of the subpoints T_j (j = 1, 2, 3, 4, 5) seen from the observer's viewpoint P as overlapping the display points A_1 and A_2 are distributed to the display points on the generation faces nearest to the respective projecting planes, then the colour information and existence probabilities of T_1, T_2 and T_3 are assigned to the display point A_1. In that case, the colour information KD_1 of the display point A_1 may, for example, be the mean of the colour information K_1, K_2, K_3 of the subpoints T_1, T_2, T_3, or the colour information K_2 of the subpoint T_2 nearest to the display point A_1. The luminance distribution coefficient γ_1 of the display point A_1 is, by formula 110, the sum of the existence probabilities β_1, β_2, β_3 of the subpoints T_1, T_2, T_3.
Similarly, the colour information and existence probabilities of the subpoints T_4 and T_5 are assigned to the generation face LD_2: the colour information KD_2 of the display point A_2 is the mean of the colour information K_4 and K_5 or the colour information K_5 of T_5, and the luminance distribution coefficient γ_2 is, by formula 110, the sum of the existence probabilities β_4 and β_5.
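For reference, the nearest-face distribution of formula 110 can be sketched as follows; the sketch is not from the patent, assumes depths are measured along the line of sight from the viewpoint P, and uses the averaging option for the colour information. All names are illustrative.

```python
import numpy as np

def distribute_to_nearest_face(subpoint_depths, betas, colors, face_depths):
    """Assign each subpoint to the nearest image generation face; the
    face's luminance distribution coefficient gamma_n is the sum of the
    assigned existence probabilities (formula 110), and its colour is the
    mean of the assigned colours."""
    face_depths = np.asarray(face_depths, dtype=float)
    gammas = np.zeros(len(face_depths))
    buckets = [[] for _ in face_depths]
    for depth, beta, color in zip(subpoint_depths, betas, colors):
        n = int(np.argmin(np.abs(face_depths - depth)))  # nearest face
        gammas[n] += beta
        buckets[n].append(np.asarray(color, dtype=float))
    face_colors = [np.mean(b, axis=0) if b else None for b in buckets]
    return face_colors, gammas
```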
Furthermore, when the spacing of the generation faces LD_n differs from the spacing of the projecting planes L_j, the colour information and existence probabilities of the subpoints on the projecting planes L_j lying between two consecutive generation faces LD_n and LD_{n+1} may instead be distributed according to the ratios of the distances from the projecting planes L_j to the generation faces LD_n and LD_{n+1}. In that case, writing the set of projecting planes L_j between the generation faces LD_n and LD_{n+1} as {L_j | j ∈ Γ_n}, the luminance distribution coefficient γ_n of a display point A_n on the generation face LD_n is given by the following formula 111 using the existence probabilities β_j of the subpoints T_j.
[formula 111]
$$\gamma_n = \sum_{j \in \Gamma_n} w_{j,n} \, \beta_j$$
In formula 111, w_{j,n} is a coefficient expressing the degree of contribution of the projecting plane L_j to the generation face LD_n.
Consider, for example, the case shown in Figure 100(b), where two projecting planes L_1 and L_2 are set between two generation faces LD_1 and LD_2. Here, if the distances from the projecting plane L_1 to the generation faces LD_1 and LD_2 are B_1 and B_2 respectively, the degrees of contribution w_{1,1} and w_{1,2} of the projecting plane L_1 to the generation faces LD_1 and LD_2 are given, for example, by the following formula 112.
[formula 112]
$$w_{1,1} = \frac{B_2}{B_1 + B_2}, \quad w_{1,2} = \frac{B_1}{B_1 + B_2}$$
Similarly, if the distances from the projecting plane L_2 to the generation faces LD_1 and LD_2 are B_3 and B_4 respectively, the degrees of contribution w_{2,1} and w_{2,2} of the projecting plane L_2 to the generation faces LD_1 and LD_2 are given by the following formula 113.
[formula 113]
$$w_{2,1} = \frac{B_4}{B_3 + B_4}, \quad w_{2,2} = \frac{B_3}{B_3 + B_4}$$
As a result, the luminance distribution coefficient γ_1 of the display point A_1 on the generation face LD_1 and the luminance distribution coefficient γ_2 of the display point A_2 on the generation face LD_2 are as shown in the following formula 114.
[formula 114]
$$\gamma_1 = w_{1,1}\beta_1 + w_{2,1}\beta_2, \quad \gamma_2 = w_{1,2}\beta_1 + w_{2,2}\beta_2$$
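For reference, the worked example of formulas 112 to 114 can be written out as follows; the function name and argument order are illustrative only.

```python
def two_plane_example(b1, b2, b3, b4, beta1, beta2):
    """Formulas 112-114: projecting planes L1 and L2 lie between the
    generation faces LD1 and LD2; each plane contributes to a face in
    inverse proportion to its distance from that face."""
    w11, w12 = b2 / (b1 + b2), b1 / (b1 + b2)  # formula 112
    w21, w22 = b4 / (b3 + b4), b3 / (b3 + b4)  # formula 113
    gamma1 = w11 * beta1 + w21 * beta2         # formula 114
    gamma2 = w12 * beta1 + w22 * beta2
    return gamma1, gamma2
```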
In this way, when the three-dimensional shape of the object is obtained, the probability (existence probability) β_j that the object surface exists at each subpoint T_j is obtained from the degrees of correlation Q_j of the subpoints on the string and assigned to the shape, and the luminance distribution coefficients of the display points A on the generation faces LD are given by those existence probabilities β_j. Then, when no subpoint on the string has a degree of correlation Q_j with a distinctive value and the reliability of the distance estimate for the object surface is low, the object surface is displayed vaguely over a plurality of projecting planes on that string. Moreover, if the luminance distribution coefficients γ of the points on the generation faces LD are determined from the existence probabilities β_j of the subpoints T_j, then when the two-dimensional images generated on the generation faces are displayed on the actual picture display faces to present the three-dimensional image of the object, wherever the reliability of the distance estimate is low, the existence probability β is dispersed over a plurality of subpoints on the string and the object surface is displayed vaguely. Therefore, the noise on the three-dimensional image shown on the DFD is not conspicuous, and an image that looks more natural to the observer can be displayed.
As discussed above, according to the three-dimensional image display method of the present embodiment 5-1, a three-dimensional image that looks natural to the observer can be displayed even without obtaining an exact three-dimensional shape of the displayed object.
In the image display method of the present embodiment 5-1, the case where colour images, in which each point (pixel) is represented by colour information using the three primary colours red (R), green (G) and blue (B), are obtained and used to form the three-dimensional shape of the object has been described as an example. However, the method is not limited to colour images; monochrome images, in which each point (pixel) is represented by luminance (Y) and colour difference (U, V), may also be obtained and used to obtain the three-dimensional shape of the object. When the obtained images are monochrome images, the luminance information (Y) can be used as the information corresponding to the colour information, the three-dimensional shape can be obtained by the procedure described in the present embodiment 5-1, and the two-dimensional images can be generated.
(embodiment 5-2)
Figures 101 to 104 are schematic diagrams for explaining the three-dimensional image display method of embodiment 5-2 of the present invention. Figure 101 shows the relation between subpoints and corresponding points, Figure 102 is a flowchart of an example of the procedure for determining the colour information and existence probability of a subpoint, and Figures 103 and 104 illustrate the method of obtaining the existence probability.
The basic procedure of the three-dimensional image display method of the present embodiment 5-2 is the same as that of embodiment 5-1, carrying out the processing from step 101 to step 105 shown in Figure 90. The difference from the three-dimensional image display method of embodiment 5-1 is that in step 101, instead of a plurality of images with different viewpoints, a plurality of images with different focusing distances are obtained, and in step 103 these images with different focusing distances are used to obtain the three-dimensional shape of the object.
When the three-dimensional image display method of the present embodiment 5-2 is used to display the three-dimensional image of an object on the DFD, first, a plurality of images taken from a certain viewpoint while changing the focusing distance are obtained. These images are taken using, for example, a polarization-type binary optical system, a varifocal mirror, or the like. The obtained images may, as in embodiment 5-1, be colour images or monochrome images. Then, as described in embodiment 5-1, after the observer's viewpoint has been set (step 102), the processing of step 103 for obtaining the three-dimensional shape of the object is carried out.
In the processing of step 103, as described in embodiment 5-1, first the projecting planes L_j (j = 1, 2, ..., M) and the reference viewpoint R are set (step 10301, step 10302). Then the subpoint strings and corresponding points are set, and the array (area) for storing the information of the projecting planes is secured (step 10303, step 10304).
When, as in the three-dimensional image display method of the present embodiment 5-2, the three-dimensional image of the object is displayed using a plurality of images with different focusing distances, the projecting planes L_j set in step 10301 are set, as shown for example in Figure 101, so that their distances from the viewpoint C of the camera agree with the focusing distances f_i (i = 1, 2, ..., N) of the images taken by the camera. In step 10303, the corresponding point G_i of a subpoint T_j is taken to be the point on the image Img_i that overlaps the subpoint T_j when the subpoint T_j is seen from the viewpoint C of the camera. The method of setting the subpoint strings and of associating the coordinates of the subpoints T_j with the digital image coordinates of the corresponding points G_i are the same as the methods described in embodiment 5-1, so their detailed explanation is omitted.
The processing of step 10304 for securing the area that stores the information of the projecting planes may also be carried out in the same way as the processing described in embodiment 5-1, so its detailed explanation is omitted.
Next, the obtained images are used to determine the colour information and existence probability information of each subpoint T_j (step 10305). In the three-dimensional image display method of the present embodiment 5-2 as well, step 10305 carries out, as shown for example in Figure 102, a double loop that repeats, for all the set subpoint strings, the processing of determining the colour information and existence probability of each subpoint T_j on a string. Therefore, first the subpoint string is initialized (step 10305a). Then the subpoint T_j on the string is initialized, for example to j = 1 (step 10305b).
Next, the colour information of the subpoint T_j is determined (step 10305c). In step 10305c, for example, the mean of the colour information of the corresponding points G_i set in step 10303 is determined as the colour information K_j of the subpoint T_j.
Next, the focus degree Q_j of the subpoint T_j is obtained from the degree of focus of the point on the object reflected at each corresponding point G_i of the subpoint T_j (step 10305h). The focus degree is determined from the sharpness or blur of a point on the image or of the image within a small area. There are various methods of computing the focus degree, based on the Depth from Focus theory or the Depth from Defocus theory. Here, the focus degree Q_j is obtained, for example, by comparing the magnitudes of the local spatial frequencies of the corresponding points G_i.
The Depth from Focus theory and the Depth from Defocus theory are methods of measuring the surface shape of the object by analysing a plurality of images with different focusing distances. Here, for example, the object surface can be estimated to exist at the distance corresponding to the focusing distance of the image, among the images taken while changing the focusing distance, whose local spatial frequency is the highest. Therefore, the focus degree Q_j of the subpoint T_j is computed using, for example, a local spatial frequency evaluation function such as that expressed by the following formula 115.
[formula 115]
$$Q = \frac{1}{D} \sum_{x=x_i}^{x_f} \sum_{y=y_i}^{y_f} \left\{ \sum_{p=-L_c}^{L_c} \sum_{q=-L_r}^{L_r} \left| f(x, y) - f(x+p, y+q) \right| \right\}$$
Here, f is the grey-level value of a pixel, D is a normalization constant equal to the number of pixels evaluated, and (−L_c, −L_r)-(L_c, L_r) and (x_i, y_i)-(x_f, y_f) are the small regions used for dispersion evaluation and smoothing, respectively.
Formula 115 is only one example of how the focus degree Q_j may be obtained; Q_j may also be obtained using a formula other than formula 115.
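For reference, one possible reading of formula 115 is sketched below; it is not the patent's implementation, and it assumes the evaluation window keeps at least L_c (or L_r) pixels away from the image border. All names are illustrative.

```python
def focus_degree(img, window, lc=1, lr=1):
    """Formula 115 (one reading): mean absolute grey-level difference
    between each pixel in the window (xi, yi)-(xf, yf) and its
    (2*lc+1) x (2*lr+1) neighbourhood; sharply focused regions have
    high local spatial frequency and therefore a large Q."""
    (xi, yi), (xf, yf) = window
    total, count = 0.0, 0
    for x in range(xi, xf + 1):
        for y in range(yi, yf + 1):
            for p in range(-lc, lc + 1):
                for q in range(-lr, lr + 1):
                    total += abs(float(img[x, y]) - float(img[x + p, y + q]))
            count += 1
    return total / count  # division by D, the number of evaluated pixels
```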
After the focus degree Q_j has been obtained in step 10305h, the subpoint T_j is updated and it is confirmed whether the processing of steps 10305c and 10305h has been carried out for all subpoints on the subpoint string being processed (step 10305e). If a subpoint remains for which steps 10305c and 10305h have not been carried out, the procedure returns to step 10305c to obtain its colour information K_j and focus degree Q_j.
Once the colour information and focus degree Q_j have been obtained for all subpoints on the subpoint string being processed, each subpoint T_j on the string has been given colour information K_j and a focus degree Q_j, as shown in Figure 103. The focus degree Q_j of each subpoint T_j here plays the role of the degree of correlation used in embodiment 5-1 when determining the existence probability β. Depending on the shape of the object, the pattern (texture) of its surface, the photographing conditions and so on, when the focus degrees Q_j of the subpoints T_j on a string are compared, there is sometimes no subpoint whose focus degree takes a distinctive value. In such a case, even if the object surface is estimated to lie at a certain subpoint, the reliability of the estimate is low and the judgement may be wrong. And when the judgement is wrong, its influence appears as large noise on the generated image.
Therefore, in the three-dimensional image display method of the present invention, the probability (existence probability) β_j that the object surface exists at each subpoint T_j on the subpoint string is determined next (step 10305f). Here, the existence probability β_j must satisfy formulas 106 and 107 above. Accordingly, in the present embodiment 5-2, the existence probability β_k of a subpoint T_k is determined using, for example, the following formula 116.
[formula 116]
$$\beta_k = \frac{Q_k}{\sum_{j=1}^{M} Q_j}$$
Basically, the existence probability β_j only has to satisfy the conditions of formulas 106 and 107, so the existence probability may also be determined using formulas other than formula 116.
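For reference, formula 116 amounts to the following one-line normalization; note that, unlike the correlation case of formulas 108 and 109, no reciprocal is taken, because a large focus degree indicates the surface.

```python
import numpy as np

def existence_from_focus(q):
    """Formula 116: normalize the focus degrees Q_j directly along the
    subpoint string so they satisfy formulas 106 and 107."""
    q = np.asarray(q, dtype=float)
    return q / q.sum()
```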
As shown in Figure 104, after the existence probability β_j of each subpoint T_j has been determined in step 10305f, the colour information K_j and existence probability β_j of each subpoint T_j are stored in the area secured in step 10304.
After storing the colour information K_j and existence probability β_j of each subpoint T_j, the subpoint string is updated and it is confirmed whether the processing of steps 10305c to 10305f has been carried out for all subpoint strings determined in step 10303 (step 10305g). If a subpoint string remains for which the processing of steps 10305c to 10305f has not been carried out, the procedure returns to step 10305b and the processing from step 10305c to step 10305f is repeated.
In this way, once the processing from step 10305c to step 10305f has been carried out for all subpoint strings determined in step 10303, the processing of step 10305 ends and the three-dimensional shape of the object has been obtained. Then, after the three-dimensional shape of the object has been obtained by the processing of step 103, the three-dimensional image of the object can be displayed by determining, through the same procedure as in embodiment 5-1 and based on the obtained three-dimensional shape, the colour information and luminance distribution coefficients γ of the display points A on the generation faces LD, generating the two-dimensional images to be shown on the overlapping picture display faces of a DFD or the like (step 104), and displaying the generated images on the actual picture display faces (step 105).
In the three-dimensional image display method of the present embodiment 5-2, as in that of embodiment 5-1, when no subpoint T_j on a subpoint string of the obtained three-dimensional shape has a focus degree Q_j with a distinctive value and the reliability of the estimate of the distance to the object surface is low, the object surface is displayed vaguely over a plurality of projecting planes on that string. Moreover, if the luminance distribution coefficients γ of the points on the generation faces LD are determined from the existence probabilities β of the subpoints T_j, then when the two-dimensional images generated on the generation faces are displayed on the actual picture display faces to present the three-dimensional image of the object, wherever the reliability of the distance estimate is low, the existence probability β is dispersed over a plurality of subpoints on the string and the object surface is displayed vaguely. Therefore, the noise on the three-dimensional image shown on the DFD is not conspicuous, and an image that looks more natural to the observer can be displayed.
As discussed above, according to the three-dimensional image display method of the present embodiment 5-2, as in embodiment 5-1, a three-dimensional shape that looks natural can be displayed even without obtaining an exact three-dimensional shape of the object.
In the image generating method of the present embodiment 5-2 as well, the obtained images may be either colour images or monochrome images; in the case of monochrome images, the processing described in the present embodiment 5-2 can be carried out using the luminance information (Y) as the information corresponding to the colour information.
(embodiment 5-3)
In the present embodiment, a three-dimensional image generating apparatus with the same structure as that shown in Figure 85 of the fourth embodiment can be configured, and an image display system with the same structure as that shown in Figures 87 to 89 of the fourth embodiment can be configured. The processing carried out by the apparatus, however, corresponds to embodiments 5-1 and 5-2.
The present invention has been described concretely above based on the embodiments, but the invention is not limited to those embodiments and can, of course, be changed in various ways within a scope that does not depart from its gist.
For example, embodiment 5-1 described a method of displaying the three-dimensional image of an object from images with different viewpoints, and embodiment 5-2 described a method of displaying the three-dimensional image from images with different focal positions, but these methods may also be combined to display the three-dimensional image of the object. In this case, for a given subpoint T_j, the degree of correlation is obtained from the corresponding points of the images with different viewpoints, the local spatial frequency is obtained from the corresponding points of the images taken from a certain viewpoint while changing the focal position, and these are combined to obtain the existence probability β_j, as sketched below. In this way the reliability of the existence probability β_j improves, and an image that looks even more natural to the observer can be displayed.
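The patent does not fix a concrete combination rule, so the multiplicative rule below is only one illustrative assumption; all names are hypothetical.

```python
import numpy as np

def combined_existence(q_corr, q_focus):
    """Combine multi-view correlation degrees (small = good) and focus
    degrees (large = good) into one existence probability per subpoint:
    convert each cue to a per-string distribution, multiply, renormalize."""
    b_corr = 1.0 / np.asarray(q_corr, dtype=float)
    b_corr /= b_corr.sum()
    b_focus = np.asarray(q_focus, dtype=float)
    b_focus /= b_focus.sum()
    combined = b_corr * b_focus
    return combined / combined.sum()
```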
(effect of the 5th embodiment)
In the three-dimensional image display method of the fifth embodiment, when looking in a certain direction from the reference viewpoint, even if the reliability of the estimate of the distance (subpoint) at which the surface of the subject is located is low, the surface of the subject is held with a certain probability at the subpoints corresponding to the distances at which it may actually exist. Therefore, by displaying each point on the picture display faces with a brightness corresponding to the magnitude of the existence probability, the discontinuous noise produced in existing methods when the distance is misjudged becomes inconspicuous. In addition, even an apparatus with low processing performance, such as an ordinary popular personal computer, can generate the two-dimensional images at high speed.
The invention is not restricted to the respective embodiments described above; various changes and applications can be made within the scope of the claims.

Claims (43)

1. A virtual visual point image generating method comprising: a step of obtaining a plurality of subject images photographed by a plurality of video cameras; a step of determining a virtual viewpoint as the position from which the subject is observed; and a step of generating, based on the obtained subject images, a virtual visual point image as the image of the subject seen from the virtual viewpoint, characterized in that
the step of generating the virtual visual point image comprises:
a step 1 of setting projecting planes having a multilayer structure;
a step 2 of obtaining the corresponding points on each subject image corresponding to each subpoint on the projecting planes;
a step 3 of determining the colour information or luminance information of the subpoint based on the colour information or luminance information of the plurality of corresponding points;
a step 4 of taking, as a reference viewpoint, the viewpoint of the video camera to which the projecting plane containing the subpoint is specific and, for a plurality of subpoints in the space seen as overlapping from the reference viewpoint, calculating, based on the degree of correlation of the corresponding points or of their neighbouring regions, the degree of the possibility that the subject exists at the distance corresponding to the position of each subpoint;
a step 5 of determining the colour information or luminance information of each pixel in the virtual visual point image by carrying out, on the colour information or luminance information of the reference points seen as overlapping from the virtual viewpoint, mixing processing corresponding to the degree of the possibility that the subject exists; and
a step 6 of repeating steps 1 to 5 for all points corresponding to the pixels of the virtual visual point image.
2. The virtual visual point image generating method according to claim 1, characterized in that
step 3 mixes the colour information or luminance information of the plurality of corresponding points, or selects the colour information or luminance information of one corresponding point from among the colour information or luminance information of the plurality of corresponding points.
3. The virtual visual point image generating method according to claim 1 or 2, characterized in that
step 4 or step 5 includes a step of converting the degree of the possibility that the subject exists so as to set, for each reference point on the projecting planes, a transparency having a plurality of levels from transparent to opaque, and
step 5 carries out mixing processing corresponding to the transparency instead of the mixing processing corresponding to the degree of the possibility that the subject exists.
4. The virtual visual point image generating method according to claim 3, characterized in that
the mixing processing of step 5 processes the subpoints sequentially from the subpoint far from the virtual viewpoint toward the near one, and
the colour information or luminance information obtained in the mixing processing up to each subpoint is obtained by interior division, at the ratio corresponding to the transparency, between the colour information or luminance information of that subpoint and the colour information or luminance information obtained in the mixing processing up to the preceding subpoint.
5. The virtual visual point image generating method according to claim 1, characterized in that
step 1 sets a projecting plane specific to each video camera that photographs each subject image,
step 3 determines the colour information or luminance information of the subpoint using only the colour information or luminance information of the corresponding points of the subject images photographed by the plurality of video cameras,
step 4 calculates the degree of the possibility that the subject exists, and
step 5 corrects the mixing processing of the colour information or luminance information of the reference points according to the positional relation between the virtual viewpoint and each reference viewpoint.
6. A virtual visual point image generating apparatus comprising: a subject image obtaining unit that obtains a plurality of subject images photographed by a plurality of video cameras; a virtual viewpoint determining unit that determines a virtual viewpoint as the position from which the subject is observed; and an image generating unit that generates, based on the obtained subject images, a virtual visual point image as the image of the subject seen from the virtual viewpoint, characterized in that
the image generating unit comprises:
a projecting plane determining unit that determines projecting planes having a multilayer structure;
a reference viewpoint determining unit that determines the position of a reference viewpoint;
a texture array securing unit that secures an array for the texture images to be attached to the projecting planes;
a corresponding point matching processing unit that establishes, among the plurality of subject images, the correspondence of the portions in which the same region of the subject is photographed;
a colour information determining unit that determines the colour information or luminance information in the array of the texture images by carrying out mixing processing on the plurality of subject images;
an existence probability information determining unit that determines, based on the result of the corresponding point matching processing unit, existence probability information in the array of the texture images as the degree of the possibility that the subject exists at the distance corresponding to the position of each subpoint on the projecting planes; and
a rendering unit that renders the projecting planes seen from the virtual viewpoint, based on the colour information or luminance information determined by the colour information determining unit and the existence probability information determined by the existence probability information determining unit.
7. The virtual visual point image generating apparatus according to claim 6, characterized in that
the existence probability information determining unit has a unit that converts the existence probability information so as to set, for each reference point on the projecting planes, a transparency having a plurality of levels from transparent to opaque, and
the rendering unit renders using the transparency instead of the degree of the possibility that the subject exists.
8. The virtual visual point image generating apparatus according to claim 7, characterized in that
the rendering unit has a unit that processes the subpoints sequentially from the subpoint far from the virtual viewpoint toward the near one, and
the colour information or luminance information obtained in the mixing processing up to each subpoint is obtained by interior division, at the ratio corresponding to the transparency, between the colour information or luminance information of that subpoint and the colour information or luminance information obtained in the mixing processing up to the preceding subpoint.
9. The virtual visual point image generating apparatus according to any one of claims 6 to 8, characterized in that
the projecting plane determining unit determines a projecting plane specific to each video camera that photographs each subject image,
the colour information determining unit determines the colour information or luminance information using only the colour information or luminance information of the corresponding points of the subject images photographed by the plurality of video cameras,
the existence probability information determining unit performs its calculation taking, as the reference viewpoint, the viewpoint of the video camera to which the projecting plane containing the subpoint is specific, and
the rendering unit has a unit that performs correction according to the positional relation between the virtual viewpoint and each reference viewpoint.
10. An image generating method comprising: a step of obtaining images obtained by photographing a subject from a plurality of different viewpoints; a step of obtaining the three-dimensional shape of the subject from the plurality of images; and a step of generating, based on the obtained three-dimensional shape of the subject, an image of the subject seen from an observer's viewpoint, characterized in that
the step of obtaining the three-dimensional shape of the subject comprises: a step of setting projecting planes of a multilayer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining the colour information of each subpoint, which is a point on the projecting planes, from the colour information or luminance information of the corresponding points on the obtained images corresponding to the subpoint; a step of calculating the degree of correlation among the corresponding points corresponding to the subpoint; and a step of determining, for a plurality of subpoints seen as overlapping from the reference viewpoint, the existence probability as the probability that the surface of the subject exists at each subpoint, based on the degree of correlation of each subpoint,
the step of calculating the degree of correlation comprises: a step of preparing a plurality of camera sets, each being a combination of several viewpoints selected from the plurality of viewpoints; and a step of obtaining the degree of correlation from the corresponding points on the images included in each camera set, and
the step of determining the existence probability comprises: a step of calculating an existence probability based on the degree of correlation of each subpoint obtained for each camera set; and a step of determining the existence probability of each subpoint by carrying out integration processing on the existence probabilities determined for the respective camera sets.
11. The image generating method according to claim 10, characterized in that
the step of calculating the existence probability based on the degree of correlation of each subpoint obtained for each camera set comprises: a step of calculating an evaluation reference value from the degree of correlation of each subpoint calculated for each camera set; a step of calculating a distribution function of the existence probability by carrying out statistical processing of the evaluation reference values of each subpoint calculated for the respective camera sets; and a step of determining the existence probability of each subpoint based on the distribution function of the existence probability.
12. The image generating method according to claim 10 or 11, characterized in that
the step of generating the image of the subject seen from the observer's viewpoint mixes the colour information or luminance information of the subpoints seen as overlapping from the observer's viewpoint at the ratios corresponding to the magnitudes of their existence probabilities, thereby determining the colour information or luminance information of the points on the image to be generated, and generates one two-dimensional image.
13. The image generating method according to claim 10 or 11, characterized in that
the step of generating the image of the subject seen from the observer's viewpoint comprises: a step of setting a plurality of image generation faces at positions of different depths as seen from the observer's viewpoint; and a step of converting the colour information or luminance information and the existence probability of each subpoint into the colour information or luminance information and luminance distribution coefficient on each image generation face, based on the positional relation between each subpoint seen as overlapping from the observer's viewpoint and the points on each image generation face.
14. An image generating apparatus comprising: a subject image obtaining unit that obtains images obtained by photographing a subject from a plurality of different viewpoints; a three-dimensional shape obtaining unit that obtains the three-dimensional shape of the subject from the plurality of images; and a subject image generating unit that generates, based on the obtained three-dimensional shape of the subject, an image of the subject seen from an observer's viewpoint, characterized in that
the three-dimensional shape obtaining unit comprises: a unit that sets projecting planes of a multilayer structure in a virtual three-dimensional space; a unit that determines a reference viewpoint for obtaining the three-dimensional shape of the subject; a unit that determines the colour information or luminance information of each subpoint, which is a point on the projecting planes, from the colour information or luminance information of the corresponding points on the obtained images corresponding to the subpoint; a unit that calculates the degree of correlation among the corresponding points corresponding to the subpoint; and a unit that determines, for a plurality of subpoints seen as overlapping from the reference viewpoint, the existence probability as the probability that the surface of the subject exists at each subpoint, based on the degree of correlation of each subpoint,
the unit that calculates the degree of correlation comprises: a unit that prepares a plurality of camera sets, each being a combination of several viewpoints selected from the plurality of viewpoints; and a unit that obtains the degree of correlation from the corresponding points on the images included in each camera set, and
the unit that determines the existence probability comprises: a unit that calculates an existence probability based on the degree of correlation of each subpoint obtained for each camera set; and a unit that determines the existence probability of each subpoint by carrying out integration processing on the existence probabilities determined for the respective camera sets.
15. The image generating apparatus according to claim 14, characterized in that
the unit that calculates the existence probability based on the degree of correlation of each subpoint obtained for each camera set comprises: a unit that calculates an evaluation reference value from the degree of correlation of each subpoint calculated for each camera set; a unit that calculates a distribution function of the existence probability by carrying out statistical processing of the evaluation reference values of each subpoint calculated for the respective camera sets; and a unit that determines the existence probability of each subpoint based on the distribution function of the existence probability.
16. The image generating apparatus according to claim 14 or 15, characterized in that
the subject image generating unit that generates the image of the subject seen from the observer's viewpoint is a unit that mixes the colour information or luminance information of the subpoints seen as overlapping from the observer's viewpoint at the ratios corresponding to the magnitudes of their existence probabilities, thereby determining the colour information or luminance information of the points on the image to be generated, and generates one two-dimensional image.
17. The image generating apparatus according to claim 14 or 15, characterized in that
the subject image generating unit that generates the image of the subject seen from the observer's viewpoint comprises: a unit that sets a plurality of image generation faces at positions of different depths as seen from the observer's viewpoint; and a unit that converts the colour information or luminance information and the existence probability of each subpoint into the colour information or luminance information and luminance distribution coefficient on each image generation face, based on the positional relation between each subpoint seen as overlapping from the observer's viewpoint and the points on each image generation face.
18. An image generating method comprising: a step of obtaining a plurality of images obtained by photographing a subject while changing the focusing distance; a step of setting a virtual viewpoint as the viewpoint from which the subject reflected in the plurality of images is observed; a step of obtaining the three-dimensional shape of the subject from the plurality of images; and a step of generating, based on the obtained three-dimensional shape of the subject, an image of the subject seen from the virtual viewpoint, characterized in that
the step of obtaining the three-dimensional shape of the subject comprises: a step of setting projecting planes of a multilayer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining the colour information or luminance information of each subpoint, which is a point on the projecting planes, from the colour information or luminance information of the corresponding points on each obtained image corresponding to the subpoint; a step of determining the focus degree of the subpoint from the focus degrees of the corresponding points corresponding to the subpoint; and a step of determining, for a plurality of subpoints seen as overlapping from the reference viewpoint, the existence probability as the probability that the surface of the subject exists at the distance corresponding to the position of each subpoint, based on the focus degree of each subpoint, and
the step of generating the image of the subject seen from the virtual viewpoint mixes the colour information or luminance information of the subpoints seen as overlapping from the virtual viewpoint at the ratios corresponding to their existence probabilities, thereby determining the colour information or luminance information of each point on the image to be generated.
19. The image generating method according to claim 18, characterized in that
the step of obtaining the three-dimensional shape of the subject or the step of generating the image of the subject seen from the virtual viewpoint comprises a step of setting, for each subpoint, a transparency having a plurality of levels from transparent to opaque, based on the existence probabilities of the plurality of subpoints seen as overlapping from the reference viewpoint or the virtual viewpoint, and
the step of generating the image of the subject seen from the virtual viewpoint mixes the colour information or luminance information of the plurality of subpoints seen as overlapping from the virtual viewpoint at the ratios corresponding to the transparencies set based on the existence probabilities, thereby determining the colour information or luminance information of each point on the image to be generated.
20. The image generating method according to claim 19, characterized in that
the step of generating the image of the subject seen from the virtual viewpoint mixes the colour information or luminance information sequentially from the subpoint seen as far from the virtual viewpoint toward the near one, and
the colour information or luminance information up to each subpoint is obtained by interior division, at the ratio corresponding to the transparency, between the colour information or luminance information of that subpoint and the colour information or luminance information obtained in the mixing processing up to the preceding subpoint.
21. An image generating apparatus comprising: a subject image obtaining unit that obtains a plurality of images obtained by photographing a subject while changing the focusing distance; a virtual viewpoint setting unit that sets a virtual viewpoint as the viewpoint from which the subject reflected in the plurality of images is observed; a three-dimensional shape obtaining unit that obtains the three-dimensional shape of the subject from the plurality of images; and a rendering unit that generates, based on the obtained three-dimensional shape of the subject, an image of the subject seen from the virtual viewpoint, characterized in that
the three-dimensional shape obtaining unit comprises: a unit that sets projecting planes of a multilayer structure in a virtual three-dimensional space; a unit that determines a reference viewpoint for obtaining the three-dimensional shape of the subject; a unit that determines the colour information or luminance information of each subpoint, which is a point on the projecting planes, from the colour information or luminance information of the corresponding points on each obtained image corresponding to the subpoint; a unit that determines the focus degree of the subpoint from the focus degrees of the corresponding points corresponding to the subpoint; and a unit that determines, for a plurality of subpoints seen as overlapping from the reference viewpoint, the existence probability as the probability that the surface of the subject exists at the distance corresponding to the position of each subpoint, based on the focus degree of each subpoint, and
the rendering unit has a unit that mixes the colour information or luminance information of the subpoints seen as overlapping from the virtual viewpoint at the ratios corresponding to their existence probabilities, thereby determining the colour information or luminance information of each point on the image to be generated.
22. The image generating apparatus according to claim 21, characterized in that
the three-dimensional shape obtaining unit or the rendering unit has a unit that sets, for each subpoint, a transparency having a plurality of levels from transparent to opaque, based on the existence probabilities of the plurality of subpoints seen as overlapping from the reference viewpoint or the virtual viewpoint, and
the rendering unit has a unit that mixes the colour information or luminance information of the plurality of subpoints seen as overlapping from the virtual viewpoint at the ratios corresponding to the transparencies set based on the existence probabilities, thereby determining the colour information or luminance information of each point on the image to be generated.
23. The image generating apparatus according to claim 22, characterized in that
the rendering unit has the following unit: a unit that mixes the colour information or luminance information sequentially from the subpoint seen as far from the virtual viewpoint toward the near one, the colour information or luminance information up to each subpoint being obtained by interior division, at the ratio corresponding to the transparency, between the colour information or luminance information of that subpoint and the colour information or luminance information obtained in the mixing processing up to the preceding subpoint.
24. An image generating method comprising: a step of obtaining a plurality of images obtained by photographing a subject under different conditions; a step of obtaining the three-dimensional shape of the subject from the plurality of images; and a step of generating, based on the obtained three-dimensional shape of the subject, an image of the subject seen from an observer's viewpoint, characterized in that
the step of obtaining the three-dimensional shape of the subject comprises: a step of setting projecting planes of a multilayer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining the colour information or luminance information of each subpoint, which is a point on the projecting planes, from the colour information or luminance information of the corresponding points on the obtained images corresponding to the subpoint; and a step of determining, for a plurality of subpoints seen as overlapping from the reference viewpoint, the existence probability as the probability that the surface of the subject exists at each subpoint, and
the step of determining the existence probability comprises: a step of calculating an evaluation reference value of each subpoint from the image information of the corresponding points; a step of carrying out statistical processing of the evaluation reference values of the subpoints; and a step of calculating the existence probability of each subpoint based on the statistically processed evaluation reference values.
25. The image generating method according to claim 24, characterized in that
the step of obtaining the plurality of images obtains images obtained by photographing the subject from a plurality of different viewpoints, and
the step of determining the existence probability comprises: a step of obtaining the degree of correlation among the corresponding points corresponding to the subpoint; a step of calculating an evaluation reference value based on the degree of correlation of each subpoint; a step of carrying out statistical processing of the evaluation reference values; and a step of calculating the existence probability of each subpoint based on the statistically processed evaluation reference values.
26. The image generating method according to claim 24, characterized in that
the step of obtaining the plurality of images obtains images obtained by photographing the subject from one viewpoint while changing the focusing distance, and
the step of determining the existence probability comprises: a step of calculating the focus degree of the subpoint from the focus degrees of the corresponding points corresponding to the subpoint; a step of calculating an evaluation reference value based on the focus degree of each subpoint; a step of carrying out statistical processing of the evaluation reference values; and a step of calculating the existence probability of each subpoint based on the statistically processed evaluation reference values.
27. The image generating method according to claim 24, characterized in that
the step of acquiring the plurality of images acquires images obtained by photographing the subject from a plurality of viewpoints and images obtained by photographing the subject from one or more of the plurality of viewpoints while changing the focusing distance, and
the step of determining the existence probability comprises: a step of obtaining a degree of correlation between the corresponding points, corresponding to each projection point, on the plurality of images with different viewpoints; a step of calculating a first evaluation reference value based on the degree of correlation of each projection point; a step of performing statistical processing on the first evaluation reference values; a step of calculating a degree of focus of each projection point from the corresponding points on the images taken from each viewpoint with different focusing distances; a step of calculating a second evaluation reference value based on the degree of focus of each projection point; a step of performing statistical processing on the second evaluation reference values; and a step of calculating the existence probability of each projection point based on the statistically processed first and second evaluation reference values.
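The claim only requires that the existence probability reflect both statistically processed evaluation values; how they are fused is open. One assumed fusion takes per-projection-point scores (rescaled so higher is better) and normalizes their product:

```python
import numpy as np

def fuse_evaluations(corr_scores: np.ndarray, focus_scores: np.ndarray) -> np.ndarray:
    """Fuse the first (correlation) and second (focus) evaluation values.

    corr_scores:  (M,) statistically processed scores per projection point,
                  higher = better multi-view agreement.
    focus_scores: (M,) statistically processed scores per projection point,
                  higher = sharper focus.
    Returns fused existence probabilities over the M projection points.
    """
    fused = corr_scores * focus_scores        # both cues must agree
    return fused / (fused.sum() + 1e-12)      # normalize to probabilities
```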
28. The image generating method according to any one of claims 24 to 27, characterized in that
the step of generating the image of the subject seen from the observer's viewpoint mixes the color information or luminance information of the projection points seen as overlapping from the observer's viewpoint at ratios corresponding to the magnitudes of their existence probabilities, thereby determining the color information or luminance information of each point on the image to be generated, and generates a single two-dimensional image.
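A minimal sketch of this mixing for one pixel of the generated image; the names are illustrative, and the renormalization guard is an implementation convenience rather than part of the claim:

```python
import numpy as np

def pixel_color(colors: np.ndarray, probabilities: np.ndarray) -> np.ndarray:
    """Color of one generated-image pixel.

    colors:        (M, 3) colors of the projection points seen as
                   overlapping from the observer's viewpoint.
    probabilities: (M,) existence probabilities of those projection points.
    Returns the mixed RGB color: an expectation over the depth hypotheses.
    """
    weights = probabilities / (probabilities.sum() + 1e-12)
    return weights @ colors
```

Mixing by existence probability, instead of committing to the single most likely depth, lets an uncertain shape estimate degrade into a soft blend rather than a hard artifact.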
29. The image generating method according to any one of claims 24 to 27, characterized in that
the step of generating the image of the subject seen from the observer's viewpoint comprises: a step of setting a plurality of image generation planes at positions of different depths as seen from the observer's viewpoint; and a step of converting, based on the positional relationship between each projection point and the points on the image generation planes seen as overlapping from the observer's viewpoint, the color information or luminance information and the existence probability of each projection point into color information or luminance information and a luminance distribution coefficient on each image generation plane.
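One assumed form of this conversion, in the spirit of depth-fused display: a projection point lying between two image generation planes splits its existence probability between them according to its depth position, and the splits become the per-plane luminance distribution coefficients. The linear split and the function names are illustrative:

```python
import numpy as np

def distribute_to_planes(point_depth: float, prob: float,
                         plane_depths: np.ndarray) -> np.ndarray:
    """Split one projection point's existence probability over the planes.

    point_depth:  depth of the projection point from the observer.
    prob:         existence probability of the projection point.
    plane_depths: (K,) sorted depths of the image generation planes.
    Returns (K,) luminance distribution coefficients summing to `prob`.
    """
    coeffs = np.zeros_like(plane_depths, dtype=np.float64)
    if point_depth <= plane_depths[0]:
        coeffs[0] = prob                       # clamp in front of the stack
    elif point_depth >= plane_depths[-1]:
        coeffs[-1] = prob                      # clamp behind the stack
    else:
        k = int(np.searchsorted(plane_depths, point_depth)) - 1
        near, far = plane_depths[k], plane_depths[k + 1]
        t = (point_depth - near) / (far - near)
        coeffs[k] = (1.0 - t) * prob           # nearer plane gets more
        coeffs[k + 1] = t * prob               # weight when t is small
    return coeffs
```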
30. An image generating apparatus comprising: a subject image acquiring unit that acquires a plurality of images obtained by photographing a subject under different conditions; a subject shape acquiring unit that obtains a three-dimensional shape of the subject from the plurality of images; and a subject image generating unit that generates, based on the obtained three-dimensional shape of the subject, an image of the subject seen from an observer's viewpoint, characterized in that
the subject shape acquiring unit comprises: a unit that sets projection planes having a multi-layer structure in a virtual three-dimensional space; a unit that determines a reference viewpoint for obtaining the three-dimensional shape of the subject; a unit that determines color information or luminance information of each projection point, which is a point on the projection planes, from the color information or luminance information of the corresponding points on the acquired images that correspond to that projection point; and a unit that determines, for a plurality of projection points seen as overlapping from the reference viewpoint, an existence probability, which is the probability that the surface of the subject exists at each projection point, and
the unit that determines the existence probability comprises: a unit that calculates an evaluation reference value for each projection point from the image information of its corresponding points; a unit that performs statistical processing on the evaluation reference values of the projection points; and a unit that calculates the existence probability of each projection point based on the statistically processed evaluation reference values.
31. The image generating apparatus according to claim 30, characterized in that
the subject image acquiring unit acquires images obtained by photographing the subject from a plurality of different viewpoints, and
the unit that determines the existence probability comprises: a unit that obtains a degree of correlation between the corresponding points corresponding to each projection point; a unit that calculates an evaluation reference value based on the degree of correlation of each projection point; a unit that performs statistical processing on the evaluation reference values; and a unit that calculates the existence probability of each projection point based on the statistically processed evaluation reference values.
32. The image generating apparatus according to claim 30, characterized in that
the subject image acquiring unit acquires images obtained by photographing the subject from one viewpoint while changing the focusing distance, and
the unit that determines the existence probability comprises: a unit that calculates a degree of focus of each projection point from the corresponding points corresponding to that projection point; a unit that calculates an evaluation reference value based on the degree of focus of each projection point; a unit that performs statistical processing on the evaluation reference values; and a unit that calculates the existence probability of each projection point based on the statistically processed evaluation reference values.
33. The image generating apparatus according to claim 30, characterized in that
the subject image acquiring unit acquires images obtained by photographing the subject from a plurality of viewpoints and images obtained by photographing the subject from one or more of the plurality of viewpoints while changing the focusing distance, and
the unit that determines the existence probability comprises: a unit that obtains a degree of correlation between the corresponding points, corresponding to each projection point, on the plurality of images with different viewpoints; a unit that calculates a first evaluation reference value based on the degree of correlation of each projection point; a unit that performs statistical processing on the first evaluation reference values; a unit that calculates a degree of focus of each projection point from the corresponding points on the images taken from each viewpoint with different focusing distances; a unit that calculates a second evaluation reference value based on the degree of focus of each projection point; a unit that performs statistical processing on the second evaluation reference values; and a unit that calculates the existence probability of each projection point based on the statistically processed first and second evaluation reference values.
34. The image generating apparatus according to any one of claims 30 to 33, characterized in that
the subject image generating unit mixes the color information or luminance information of the projection points seen as overlapping from the observer's viewpoint at ratios corresponding to the magnitudes of their existence probabilities, thereby determining the color information or luminance information of each point on the image to be generated, and generates a single two-dimensional image.
35. The image generating apparatus according to any one of claims 30 to 33, characterized in that
the subject image generating unit comprises: a unit that sets a plurality of image generation planes at positions of different depths as seen from the observer's viewpoint; and a unit that converts, based on the positional relationship between each projection point and the points on the image generation planes seen as overlapping from the observer's viewpoint, the color information or luminance information and the existence probability of each projection point into color information or luminance information and a luminance distribution coefficient on each image generation plane.
36. A three-dimensional image display method comprising: a step of acquiring a plurality of images obtained by photographing a subject under different conditions; a step of obtaining a three-dimensional shape of the subject from the plurality of images; a step of setting a viewpoint position from which an observer observes a plurality of picture display planes seen as lying at different depth positions from the observer; a step of generating, based on the obtained three-dimensional shape of the subject, the two-dimensional images to be shown on the respective picture display planes; and a step of presenting a three-dimensional image of the subject by showing the generated two-dimensional images on the respective display planes, characterized in that
the step of obtaining the three-dimensional shape of the subject comprises: a step of setting projection planes having a multi-layer structure in a virtual three-dimensional space; a step of determining a reference viewpoint for obtaining the three-dimensional shape of the subject; a step of determining color information or luminance information of each projection point, which is a point on the projection planes, from the color information or luminance information of the corresponding points on the acquired images that correspond to that projection point; and a step of determining, for a plurality of projection points seen as overlapping from the reference viewpoint, an existence probability, which is the probability that the surface of the subject exists at each projection point,
the step of generating the two-dimensional images converts the color information or luminance information and the existence probability of each projection point into color information or luminance information and an existence probability of a display point, which is the point on the picture display plane corresponding to the projection plane containing that projection point, thereby generating the two-dimensional images, and
the step of presenting the three-dimensional image of the subject shows the color information or luminance information of each display point at a brightness corresponding to its existence probability.
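A minimal sketch of the final display step, assuming grayscale luminance images and per-display-point existence probabilities already mapped onto each picture display plane; scaling brightness by probability is the direct reading of the claim, while the array shapes are assumptions:

```python
import numpy as np

def displayed_planes(luminances: np.ndarray, probabilities: np.ndarray) -> np.ndarray:
    """Per-plane images actually driven to the stacked display planes.

    luminances:    (K, H, W) luminance images for the K picture display planes.
    probabilities: (K, H, W) existence probabilities of the display points.
    Returns (K, H, W) images whose brightness encodes the probabilities.
    """
    # Brightness proportional to existence probability: display points
    # likely to lie on the subject surface at this depth are shown brightly,
    # so brightness split across planes fuses into an intermediate depth.
    return luminances * probabilities
```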
37. The three-dimensional image display method according to claim 36, characterized in that
the step of acquiring the plurality of images acquires images obtained by photographing the subject from a plurality of different viewpoints, and
the step of determining the existence probability comprises: a step of obtaining a degree of correlation between the corresponding points corresponding to each projection point; and a step of determining, for a plurality of projection points seen as overlapping from the reference viewpoint, the existence probability of each projection point based on the magnitude of its degree of correlation.
38. The three-dimensional image display method according to claim 36, characterized in that
the step of acquiring the plurality of images acquires images obtained by photographing the subject from one viewpoint while changing the focusing distance, and
the step of determining the existence probability comprises: a step of calculating a degree of focus of each projection point from the corresponding points corresponding to that projection point; and a step of determining, for a plurality of projection points seen as overlapping from the reference viewpoint, the existence probability of each projection point based on the magnitude of its degree of focus.
39. The three-dimensional image display method according to claim 36, characterized in that
the step of acquiring the plurality of images acquires images obtained by photographing the subject from a plurality of viewpoints and images obtained by photographing the subject from one or more of the plurality of viewpoints while changing the focusing distance, and
the step of determining the existence probability comprises: a step of obtaining a degree of correlation between the corresponding points, corresponding to each projection point, on the images with different viewpoints; a step of calculating a degree of focus from the corresponding points on the images taken from each viewpoint with different focusing distances; and a step of determining, for a plurality of projection points seen as overlapping from the reference viewpoint, the existence probability of each projection point based on the magnitudes of its degree of correlation and its degree of focus.
40. A three-dimensional image display apparatus comprising: a subject image acquiring unit that acquires a plurality of images obtained by photographing a subject under different conditions; a three-dimensional shape acquiring unit that obtains a three-dimensional shape of the subject from the plurality of images; an observer viewpoint setting unit that sets a viewpoint position from which an observer observes a plurality of picture display planes seen as lying at different depth positions from the observer; and a two-dimensional image generating unit that generates, based on the obtained three-dimensional shape of the subject, the two-dimensional images to be shown on the respective picture display planes, the apparatus presenting a three-dimensional image of the subject by showing the generated two-dimensional images on the respective display planes, characterized in that
the three-dimensional shape acquiring unit comprises: a unit that sets projection planes having a multi-layer structure in a virtual three-dimensional space; a unit that determines a reference viewpoint for obtaining the three-dimensional shape of the subject; a unit that determines color information or luminance information of each projection point, which is a point on the projection planes, from the color information or luminance information of the corresponding points on the acquired images that correspond to that projection point; and a unit that determines, for a plurality of projection points seen as overlapping from the reference viewpoint, an existence probability, which is the probability that the surface of the subject exists at each projection point,
the two-dimensional image generating unit converts the color information or luminance information and the existence probability of each projection point into color information or luminance information and an existence probability of a display point, which is the point on the picture display plane corresponding to the projection plane containing that projection point, thereby generating the two-dimensional images, and
the color information or luminance information of each display point is shown at a brightness corresponding to its existence probability.
41. The three-dimensional image display apparatus according to claim 40, characterized in that
the subject image acquiring unit is a unit that acquires images obtained by photographing the subject from a plurality of different viewpoints, and
the unit that determines the existence probability comprises: a unit that obtains a degree of correlation between the corresponding points corresponding to each projection point; and a unit that determines, for a plurality of projection points seen as overlapping from the reference viewpoint, the existence probability of each projection point based on the magnitude of its degree of correlation.
42. The three-dimensional image display apparatus according to claim 40, characterized in that
the subject image acquiring unit is a unit that acquires images obtained by photographing the subject from one viewpoint while changing the focusing distance, and
the unit that determines the existence probability comprises: a unit that calculates a degree of focus of each projection point from the corresponding points corresponding to that projection point; and a unit that determines, for a plurality of projection points seen as overlapping from the reference viewpoint, the existence probability of each projection point based on the magnitude of its degree of focus.
43. The three-dimensional image display apparatus according to claim 40, characterized in that
the subject image acquiring unit is a unit that acquires images obtained by photographing the subject from a plurality of viewpoints and images obtained by photographing the subject from one or more of the plurality of viewpoints while changing the focusing distance, and
the unit that determines the existence probability comprises: a unit that obtains a degree of correlation between the corresponding points, corresponding to each projection point, on the images with different viewpoints; a unit that calculates a degree of focus from the corresponding points on the images taken from each viewpoint with different focusing distances; and a unit that determines, for a plurality of projection points seen as overlapping from the reference viewpoint, the existence probability of each projection point based on the magnitudes of its degree of correlation and its degree of focus.
CNB200480017333XA 2003-06-20 2004-06-18 Virtual visual point image generating method and three-dimensional image display method and device Expired - Lifetime CN100573595C (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP176778/2003 2003-06-20
JP2003176778 2003-06-20
JP016559/2004 2004-01-26
JP016551/2004 2004-01-26
JP016832/2004 2004-01-26
JP016831/2004 2004-01-26

Publications (2)

Publication Number Publication Date
CN1809844A CN1809844A (en) 2006-07-26
CN100573595C true CN100573595C (en) 2009-12-23

Family

ID=36840971

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB200480017333XA Expired - Lifetime CN100573595C (en) 2003-06-20 2004-06-18 Virtual visual point image generating method and three-dimensional image display method and device

Country Status (1)

Country Link
CN (1) CN100573595C (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1909162A1 (en) * 2006-10-02 2008-04-09 Koninklijke Philips Electronics N.V. System for virtually drawing on a physical surface
CN101662695B (en) * 2009-09-24 2011-06-15 清华大学 Method and device for acquiring virtual viewport
CN102324107B (en) * 2011-06-15 2013-07-24 中山大学 Pervasive-terminal-oriented continuous and multi-resolution encoding method of three-dimensional grid model
US9615802B2 (en) * 2012-03-26 2017-04-11 Koninklijke Philips N.V. Direct control of X-ray focal spot movement
JP5370542B1 (en) * 2012-06-28 2013-12-18 カシオ計算機株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
TW201403545A (en) * 2012-07-13 2014-01-16 Vivotek Inc Synthetic virtual perspective image processing system and method
EP2806401A1 (en) * 2013-05-23 2014-11-26 Thomson Licensing Method and device for processing a picture
CN103337095B (en) * 2013-06-25 2016-05-18 桂林理工大学 Stereoscopic virtual display method for three-dimensional geographical entities in real space
CN106464853B (en) * 2014-05-21 2019-07-16 索尼公司 Image processing equipment and method
CN105204618B (en) * 2015-07-22 2018-03-13 深圳多新哆技术有限责任公司 Projection display method and device for virtual objects in virtual space
CN107180406B (en) 2016-03-09 2019-03-15 腾讯科技(深圳)有限公司 Image processing method and equipment
JP6743893B2 (en) * 2016-08-10 2020-08-19 ソニー株式会社 Image processing apparatus and image processing method
CN107315470B (en) * 2017-05-25 2018-08-17 腾讯科技(深圳)有限公司 Graphic processing method, processor and virtual reality system
JP6433559B1 (en) 2017-09-19 2018-12-05 キヤノン株式会社 Providing device, providing method, and program
CN109949423A (en) * 2019-02-28 2019-06-28 华南机械制造有限公司 Three-dimensional visualization display and interaction method, device, storage medium, and terminal device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110207614A (en) * 2019-05-28 2019-09-06 南京理工大学 High-resolution high-precision measurement system and method based on double telecentric camera matching
CN110207614B (en) * 2019-05-28 2020-12-04 南京理工大学 High-resolution high-precision measurement system and method based on double telecentric camera matching

Also Published As

Publication number Publication date
CN1809844A (en) 2006-07-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CX01 Expiry of patent term

Granted publication date: 20091223