Summary of the invention
In view of this, the object of the present invention is to provide a projection-based dynamic texture mapping method that can quickly and easily produce richly varying textured three-dimensional models, and that gives planar materials in the scene a depth-of-field effect.
The technical solution of the present invention is realized as follows:
A projection-based dynamic texture mapping method, the detailed process of which is:
Step 1: map each vertex of the object models in the scene to uv coordinates in the dynamic texture;
Step 2: draw the dynamic texture and project the drawn dynamic texture onto the object models in the scene;
Step 3: according to the scene depth range to be represented, set the interaxial distance B between the two virtual cameras, ensuring that all objects in the scene lie within a suitable depth range, and adjust the stereoscopic effect of the objects;
The suitable depth range is [D″1, D″2]:
D″1 = L/(1 + (2.4W×C + E)×L/(2W×F×B))  (6)
D″2 = L/(1 − (2.4W×C + E)×L/(2W×F×B))  (7)
where C = 0.0174, W is the projection screen width, E is the interpupillary distance, F is the focal length of the virtual cameras, and L is the distance from the convergence plane to the virtual cameras;
Step 4: for each planar material in the scene, apply a shift transformation to its image in each of the two virtual cameras, so that the planar material exhibits depth of field.
Further, the detailed process of Step 3 of the present invention is:
(1) First render the depth map of the scene, i.e., obtain the distance of each object in the scene from the virtual cameras, thereby obtaining the depth range [D1, D2] of all object models in the scene;
(2) Within the depth range [D1, D2], determine the depth range [D′1, D′2] of the object models to be represented in the scene; from [D′1, D′2], calculate the theoretical interaxial distance between the two virtual cameras; from this interaxial distance, calculate the suitable depth range [D″1, D″2] of the scene; if [D′1, D′2] does not lie within [D″1, D″2], readjust the interaxial distance between the two virtual cameras until [D′1, D′2] lies within [D″1, D″2]; adjust the thickness of the object models according to the stereoscopic effect required of them in the scene.
Further, the detailed process of step (2) of the present invention is:
First, within the depth range [D1, D2], determine the depth range [D′1, D′2] of the scene to be represented, and use formula (1) to calculate the theoretical interaxial distance B between the two virtual cameras:
B = P/(F×(1/n − 1/m))  (1)
P = 2×0.0174×width
where F is the focal length of the virtual cameras, n = D′1, m = D′2, and width is the virtual film width set on the virtual cameras, in mm;
Then, with the currently set distance L from the convergence plane to the virtual cameras, calculate the depth range [D″1, D″2] according to formulas (6) and (7); if [D′1, D′2] does not lie within [D″1, D″2], readjust the interaxial distance B between the two virtual cameras until [D′1, D′2] lies within [D″1, D″2]:
D″1 = L/(1 + (2.4W×C + E)×L/(2W×F×B))  (6)
D″2 = L/(1 − (2.4W×C + E)×L/(2W×F×B))  (7)
where C = 0.0174, W is the projection screen width, and E is the interpupillary distance;
Finally, use formula (8) to calculate the stereoscopic effect Depth_angle of an object model in the scene located at depth f and having thickness depth; if the stereoscopic effect of the object model is insufficient, move the model closer to the virtual cameras or increase the interaxial distance B, and vice versa; when increasing or decreasing B, ensure that the depth range [D′1, D′2] remains within [D″1, D″2];
Depth_angle = 1.67×F×B×depth/((f+depth)×f)  (8)
Further, the detailed process of Step 4 of the present invention is:
Given the distance d from a planar material to the virtual cameras, and letting Size be the width in pixels of the final rendered image, the shift sd of the planar material's image in the two virtual cameras is
sd = Size×z×(d−L)/(2d)
z = F×B/L
where sd is in pixels;
Shift the image of the planar material in the left virtual camera to the left by sd pixels (if sd is less than 0, shift it to the right by |sd| pixels); shift the image in the right virtual camera to the right by sd pixels (if sd is less than 0, shift it to the left by |sd| pixels).
Beneficial effects:
First, the present invention can determine a suitable interaxial distance between the virtual cameras according to the scene depth range to be represented, achieving the maximum stereoscopic effect while keeping viewing comfortable for the audience.
Second, by applying a translational adjustment to the images of planar materials in the two virtual cameras, the present invention gives planar materials a depth-of-field effect in the stereoscopic film production process.
Embodiment
To make the objects, technical solutions, and advantages of the present application clearer, the technical solutions are described clearly and completely below in conjunction with specific embodiments of the application and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in this application, without creative effort, fall within the scope of protection of this application.
The technical solutions provided by the embodiments of the application are described in detail below in conjunction with the drawings.
As shown in Figure 1, a projection-based dynamic texture mapping method proceeds as follows:
Step 1: map each vertex of the object models in the scene to uv coordinates in the dynamic texture.
In this embodiment, because effects of dramatic change over time are required, background models such as mountains and clouds undergo large deformations. To ensure that the dynamic texture always projects correctly onto the object models, for each vertex of an object model, obtain its coordinates in the camera coordinate system and transform them by perspective projection into the camera's canonical view volume. Within this space, a vertex lying inside the [−1,1]³ cube is considered to be within the virtual camera's field of view, so perspective division is applied to obtain its normalized device coordinates, which are taken as the uv coordinates of that vertex in the texture; a vertex outside the [−1,1]³ cube is considered to be outside the virtual camera's field of view, and no uv coordinates are assigned to it. This step automates the manual uv mapping of the traditional production pipeline.
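The vertex-to-uv mapping just described can be sketched in code. The following is an illustrative sketch only, assuming NumPy and an OpenGL-style perspective matrix; the function names and parameters are hypothetical, not part of the claimed method:

```python
import numpy as np

def perspective_matrix(fov_y_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def vertex_to_uv(vertex_cam, proj):
    """Map a vertex (camera coordinates) to uv coordinates in the dynamic
    texture, or None if it is outside the camera's field of view."""
    v = proj @ np.append(vertex_cam, 1.0)   # perspective transform
    ndc = v[:3] / v[3]                      # perspective division -> NDC
    if np.any(np.abs(ndc) > 1.0):           # outside the [-1,1]^3 cube
        return None                         # no uv coordinates assigned
    # Remap NDC x,y from [-1,1] to texture uv in [0,1].
    return ((ndc[0] + 1.0) / 2.0, (ndc[1] + 1.0) / 2.0)
```

A vertex on the optical axis in front of the camera maps to the texture center (0.5, 0.5); a vertex behind the camera yields None.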
Step 2: draw the dynamic texture and project the drawn dynamic texture onto the object models in the scene, so that the colors of the object models change over time, realizing the effect set by the animator.
Because the object models in the scene change over time as designed, a dynamic texture can be rendered for each frame according to the mapping relation determined in Step 1, and the per-frame dynamic texture is then projected onto the object models in the scene, so that the scene changes over time.
Because the texture is projected onto the models in the scene, when it is drawn in 2D animation software such as Flash, the drawing needs to extend suitably beyond the outline of each model.
Step 3: according to the scene depth range to be represented, set the interaxial distance between the two virtual cameras, ensure that all objects in the scene lie within a suitable depth range, and adjust the stereoscopic effect of the objects.
(1) First render the depth map of the scene, i.e., obtain the distance of each object in the scene from the virtual cameras, yielding the depth range [D1, D2] of all object models in the scene.
(2) Within the depth range [D1, D2], determine the depth range [D′1, D′2] of the object models to be represented in the scene; from [D′1, D′2], calculate the theoretical interaxial distance between the two virtual cameras. Once the interaxial distance of the two virtual cameras is determined, the depth range within which the audience will not feel dizzy is also determined; to ensure the audience experiences no discomfort, calculate from the theoretical interaxial distance the suitable depth range [D″1, D″2] of the scene. If [D′1, D′2] does not lie within [D″1, D″2], the depth range occupied by the object models in the scene exceeds the range the audience can perceive comfortably, and the interaxial distance B between the two virtual cameras must be readjusted until [D′1, D′2] lies within [D″1, D″2]. Then adjust the thickness of the object models according to the stereoscopic effect required of them in the scene.
The concrete computation of step (2) is as follows:
First, within the depth range [D1, D2], determine the depth range [D′1, D′2] of the scene to be represented, use formula (1) to calculate the theoretical interaxial distance B between the two virtual cameras, and then adjust it according to the needs of the scene: increasing B strengthens the stereoscopic effect of the current scene, while decreasing B weakens it.
B = P/(F×(1/n − 1/m))  (1)
P = 2×0.0174×width
where F is the focal length of the virtual cameras, n = D′1, m = D′2, and width is the virtual film width set on the virtual cameras, in mm;
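As an illustration, formula (1) can be evaluated as follows; this is a sketch only, and the function name and example values are hypothetical:

```python
def theoretical_interaxial(F, n, m, width):
    """Theoretical interaxial distance B between the two virtual cameras
    (formula (1)): B = P / (F * (1/n - 1/m)).

    F: focal length of the virtual cameras (mm)
    n, m: nearest (D'1) and farthest (D'2) depths to be represented
    width: virtual film width set on the virtual cameras (mm)
    """
    P = 2 * 0.0174 * width   # for common 35 mm film, P = 1.218 (about 1.2)
    return P / (F * (1.0 / n - 1.0 / m))

# Example: F = 28.9 mm, nearest object at 100 and farthest at 400 scene units.
B = theoretical_interaxial(28.9, 100.0, 400.0, 35.0)
```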
Then, with the currently set distance L from the convergence plane to the virtual cameras, calculate the depth range [D″1, D″2] according to formulas (6) and (7); if [D′1, D′2] does not lie within [D″1, D″2], readjust the interaxial distance B between the two virtual cameras until [D′1, D′2] lies within [D″1, D″2], which ensures that the audience experiences no discomfort while viewing.
D″1 = L/(1 + (2.4W×C + E)×L/(2W×F×B))  (6)
D″2 = L/(1 − (2.4W×C + E)×L/(2W×F×B))  (7)
where C = 0.0174, W is the projection screen width, and E is the interpupillary distance.
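Formulas (6) and (7) can likewise be sketched. Note that the handling of the case where the denominator of formula (7) becomes non-positive (treated here as "no finite far limit") is our interpretation, not stated in the text:

```python
def comfortable_depth_range(L, W, F, B, E=0.065, C=0.0174):
    """Comfortable depth range [D''1, D''2] per formulas (6) and (7).

    L: distance from the convergence plane to the virtual cameras
    W: projection screen width (m); E: interpupillary distance (m)
    F: focal length of the virtual cameras (mm); B: interaxial distance
    """
    k = (2.4 * W * C + E) * L / (2 * W * F * B)
    near = L / (1 + k)                              # formula (6)
    # When k >= 1 formula (7) has no positive solution; we interpret this
    # as an unbounded far comfort limit (an assumption of this sketch).
    far = L / (1 - k) if k < 1 else float("inf")    # formula (7)
    return near, far
```

[D′1, D′2] is acceptable when near ≤ D′1 and D′2 ≤ far; otherwise B is readjusted as described above.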
Finally, use formula (8) to calculate the stereoscopic effect Depth_angle of an object model in the scene located at depth f and having thickness depth; if the stereoscopic effect of the object model is insufficient, the model can be moved closer to the virtual cameras or the interaxial distance B can be increased, and vice versa; when increasing or decreasing B, ensure that the depth range [D′1, D′2] remains within [D″1, D″2];
Depth_angle = 1.67×F×B×depth/((f+depth)×f)  (8)
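Formula (8), together with the one-arc-minute visibility threshold of the human eye (about 0.000291 rad, as used in the derivation below), can be sketched as follows; the helper function names are illustrative only:

```python
ONE_ARC_MINUTE = 0.000291  # approximate angular resolution of the human eye (rad)

def depth_angle(F, B, f, depth):
    """Stereoscopic effect of an object model (formula (8)): the difference
    in parallax arc between its front and back faces.

    F: focal length (mm); B: interaxial distance
    f: distance from the object to the virtual cameras; depth: its thickness
    """
    return 1.67 * F * B * depth / ((f + depth) * f)

def looks_flat(F, B, f, depth):
    # Below one arc minute the object carries no visible depth information.
    return depth_angle(F, B, f, depth) < ONE_ARC_MINUTE
```

Consistent with the text, moving an object farther from the cameras (larger f) or reducing B decreases Depth_angle.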
Step 4: for each planar material in the scene, apply a shift transformation to its image in each of the two virtual cameras, so that the planar material exhibits depth of field.
Given the distance d from a planar material to the virtual cameras (in front of or behind the convergence plane, which is at distance L from the virtual cameras), and letting Size be the width in pixels of the final rendered image, the shift sd of the planar material's image in the two virtual cameras is
sd = Size×z×(d−L)/(2d)
z = F×B/L
where sd is in pixels;
As shown in Figure 2, shift the image of the planar material in the left virtual camera to the left by sd pixels (if sd is less than 0, shift it to the right by |sd| pixels); shift the image of this planar material in the right virtual camera to the right by sd pixels (if sd is less than 0, shift it to the left by |sd| pixels).
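The shift computation can be sketched as follows. This is illustrative only; in particular, the helper returning signed rightward offsets for the (left, right) camera images is our own convention for expressing "left image moves left by sd, right image moves right by sd":

```python
def plane_shift_pixels(Size, F, B, L, d):
    """Shift sd (in pixels) for a planar material at distance d:
    z = F*B/L, sd = Size*z*(d-L)/(2*d)."""
    z = F * B / L
    return Size * z * (d - L) / (2.0 * d)

def shifted_offsets(sd):
    # Signed horizontal offsets (positive = rightward) for the
    # (left camera, right camera) images; a negative sd reverses
    # both directions automatically, matching the |sd| rule above.
    return -sd, +sd
```

A planar material lying exactly on the convergence plane (d = L) gets zero shift; one behind the plane (d > L) gets a positive sd.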
The derivation of formulas (1)-(8) is briefly described below:
According to the lens and camera position set by the animation director, and the depth range of the objects to be represented in the scene, i.e., the nearest object at distance n and the farthest object at distance m from the camera, the theoretical interaxial distance B between the two virtual cameras that meets the above requirements under the virtual camera model is
B = P/(F×(1/n − 1/m))  (1)
P = 2×0.0174×width
where F is the focal length of the virtual cameras, typically F = 28.9 mm, and width is the virtual film width set on the virtual cameras, in mm; for common 35 mm film, P ≈ 1.2.
If the zero-parallax plane (the convergence plane) is at distance L from the cameras, then the relative distance z (the value before convergence adjustment) between the left-eye and right-eye images of a point on the zero-parallax plane is
z = F×B/L  (2)
Let dn be the distance from the virtual cameras to an object in the set depth range lying in front of the convergence plane, and df the distance to an object lying behind it; in the 3D production tool, the length of the virtual film on the virtual camera is normalized to 1. The relative distances Delta_n and Delta_f (values after convergence adjustment) between the imaging points, on the two virtual camera films, of objects in front of and behind the convergence plane are respectively:
Delta_n = z×(L−dn)/dn  (3)
Delta_f = z×(df−L)/df  (4)
Let delta be the relative distance (after convergence adjustment) between an object's imaging points on the two virtual camera films. When these imaging points are finally projected onto a large screen W meters wide, the left-right eye parallax arc seen by front-row viewers is:
delta_a = (2×delta×W − E)/(1.2×W)  (5)
where E is the interpupillary distance, about 0.065 m for adults and about 0.055 m for children. Following the usual cinema layout, the front row sits at a distance of about 0.6W from the screen.
Because the human eyes judge object distance by converging the lines of sight, when the angular difference between an object's images in the two eyes exceeds 2 degrees, the lines of sight separate rather than converge, causing perceptual discomfort. Therefore delta_a should be less than 2 degrees; taking C = π/180 ≈ 0.01745 (one degree in radians), the condition is delta_a ≤ 2C.
Substituting formulas (3) and (4) for delta in formula (5), the nearest acceptable object distance Sn and farthest acceptable distance Sf that avoid separation of the lines of sight are:
Sn = L/(1 + (2.4W×C + E)×L/(2W×F×B))  (6)
Sf = L/(1 − (2.4W×C + E)×L/(2W×F×B))  (7)
where D″1 = Sn and D″2 = Sf; Sn and Sf are expressed in the units of the 3D animation tool, typically cm (or m, etc.). When the distance d from an object to the virtual cameras falls outside [Sn, Sf], the angular difference between the object's left-eye and right-eye images causes the viewer's lines of sight to separate, resulting in discomfort.
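The consistency of formula (6) with formulas (2), (3), and (5) can be checked numerically: at the nearest comfortable distance Sn, the disparity from formula (3) should produce a parallax arc of exactly 2C in formula (5). A small check with illustrative values:

```python
def parallax_arc(delta, W, E):
    # Formula (5): parallax arc seen by front-row viewers.
    return (2 * delta * W - E) / (1.2 * W)

def nearest_comfortable(L, W, F, B, E, C=0.0174):
    # Formula (6).
    return L / (1 + (2.4 * W * C + E) * L / (2 * W * F * B))

# Illustrative values: L = 100 scene units, W = 10 m screen, F = 28.9 mm,
# B = 5, adult interpupillary distance.
L, W, F, B, E, C = 100.0, 10.0, 28.9, 5.0, 0.065, 0.0174
Sn = nearest_comfortable(L, W, F, B, E, C)
z = F * B / L                  # formula (2)
delta_n = z * (L - Sn) / Sn    # formula (3): disparity at distance Sn
assert abs(parallax_arc(delta_n, W, E) - 2 * C) < 1e-9  # exactly the 2-degree limit
```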
Finally, for an object at distance f from the virtual cameras with thickness depth, where f lies in [L, Sf] (objects in [Sn, L] naturally have a stronger stereoscopic effect and a larger angular difference), formula (4) gives the difference in the finally presented parallax arc between points on the front and back of the object (i.e., its stereoscopic effect) as:
Depth_angle = 1.67×F×B×depth/((f+depth)×f)  (8)
Because the angular resolution limit of the human eye is 1 arc minute, about 0.000291 rad, when Depth_angle is less than 1 arc minute the object appears to carry no depth information and looks flat. It follows that the farther an object is from the cameras, or the smaller the interaxial distance, the smaller this difference in parallax arc; Depth_angle can therefore be used to represent the stereoscopic effect of an object.