CN104599308A - Projection-based dynamic mapping method - Google Patents

Projection-based dynamic mapping method

Info

Publication number
CN104599308A
Authority
CN
China
Prior art keywords
distance
depth
scene
virtual video
depth range
Prior art date
Legal status
Granted
Application number
CN201510062058.3A
Other languages
Chinese (zh)
Other versions
CN104599308B (en)
Inventor
朱承昊 (Zhu Chenghao)
李然 (Li Ran)
Current Assignee
Beijing section skill has appearance science and technology limited Company
Original Assignee
Beijing Section Skill Has Appearance Science And Technology Ltd Co
Priority date
Filing date
Publication date
Application filed by Beijing Section Skill Has Appearance Science And Technology Ltd Co filed Critical Beijing Section Skill Has Appearance Science And Technology Ltd Co
Priority to CN201510062058.3A priority Critical patent/CN104599308B/en
Publication of CN104599308A publication Critical patent/CN104599308A/en
Application granted granted Critical
Publication of CN104599308B publication Critical patent/CN104599308B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20228 - Disparity calculation for image-based rendering

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a projection-based dynamic mapping method. The method specifically includes: step 1, mapping each vertex of the object models in a scene to uv coordinates in a dynamic map; step 2, drawing the dynamic map and projecting the drawn dynamic map onto the object models in the scene; step 3, setting the interaxial distance between two virtual cameras according to the depth range of the scene to be presented, ensuring that all objects in the scene lie within a suitable depth range, and adjusting the stereoscopic effect of the objects; step 4, for each planar material in the scene, applying a displacement transformation to its image in each of the two virtual cameras so that the planar material exhibits depth of field. The method can quickly and easily produce three-dimensional models whose maps change over time, and can give the planar materials in the scene a depth-of-field effect.

Description

A projection-based dynamic mapping method
Technical field
The invention belongs to the technical field of three-dimensional animation, and specifically relates to a projection-based dynamic mapping method.
Background art
Three-dimensional animation is an emerging technology that has developed in recent years alongside advances in computer hardware; it can quickly and conveniently produce three-dimensional animation or basic film and television shots that satisfy a director's requirements.
The general production principle of three-dimensional animation is: first, a virtual world is built in a computer using three-dimensional animation software (such as 3ds Max, Maya or Houdini); then scenes, three-dimensional characters and other three-dimensional models are added to this virtual world; finally, the models' animation curves, the virtual cameras' motion trajectories and other animation parameters are set, and the character animation is rendered.
Because three-dimensional animation can accurately simulate real scenes and imposes almost no restrictions on creation, it is now widely used in entertainment, education, military and many other fields.
Taking the film industry as an example, more and more films are now produced with 3D techniques, giving audiences an immersive experience. Compared with 2D films, however, 3D films involve greater technical difficulty and workload. For example, the planar materials often used in post-production compositing are incompatible with 3D techniques; the digital set-extension techniques usable for distant shots in 2D films require, in 3D production, that the map be uv-mapped onto a background model before a stereoscopic effect can be rendered; and because the depth range that human binocular vision can comfortably perceive is limited, exceeding it causes dizziness and discomfort.
Summary of the invention
In view of this, the object of the present invention is to provide a projection-based dynamic mapping method that can quickly and easily produce three-dimensional models whose maps change over time, and that gives the planar materials in the scene a depth-of-field effect.
The technical solution of the present invention is realized as follows:
A projection-based dynamic mapping method, the detailed process being:
Step 1: map each vertex of the object models in the scene to uv coordinates in the dynamic map;
Step 2: draw the dynamic map, and project the drawn dynamic map onto the object models in the scene;
Step 3: according to the depth range of the scene to be presented, set the interaxial distance B between the two virtual cameras, ensure that all objects in the scene lie within a suitable depth range, and adjust the stereoscopic effect of the objects;
The suitable depth range is [D″₁, D″₂]:
D″₁ = L/(1 + (2.4W×C + E)×L/(2W×F×B)) (6)
D″₂ = L/(1 - (2.4W×C + E)×L/(2W×F×B)) (7)
where C = 0.0174, W is the projection screen width, E is the interpupillary distance, F is the focal length of the virtual cameras, and L is the distance from the convergence plane to the virtual cameras;
Step 4: for each planar material in the scene, apply a displacement transformation to its image in each of the two virtual cameras, so that the planar material exhibits depth of field.
Further, the detailed process of step 3 of the present invention is:
(1) First render a depth map of the scene, i.e. obtain the distance of each object in the scene from the virtual cameras, thereby obtaining the depth range [D₁, D₂] of all object models in the scene;
(2) Within the depth range [D₁, D₂], determine the depth range [D′₁, D′₂] of the object models to be presented in the scene, compute the theoretical interaxial distance between the two virtual cameras from [D′₁, D′₂], and compute the suitable depth range [D″₁, D″₂] of the scene from this interaxial distance; if [D′₁, D′₂] does not lie within [D″₁, D″₂], readjust the interaxial distance between the two virtual cameras until [D′₁, D′₂] lies within [D″₁, D″₂]; and adjust the thickness of the object models according to the stereoscopic effect required of them in the scene.
Further, the detailed process of step (2) of the present invention is:
First, within the depth range [D₁, D₂], determine the depth range [D′₁, D′₂] of the scene to be presented, and use formula (1) to compute the theoretical interaxial distance B between the two virtual cameras:
B = P/(F×(1/n - 1/m)) (1)
P = 2×0.0174×width
where F is the focal length of the virtual cameras, n = D′₁, m = D′₂, and width is the virtual film-back size set on the virtual cameras, in mm;
Then, using the currently set distance L from the convergence plane to the virtual cameras, compute the depth range [D″₁, D″₂] from formulas (6) and (7); if [D′₁, D′₂] does not lie within [D″₁, D″₂], readjust the interaxial distance B between the two virtual cameras until [D′₁, D′₂] lies within [D″₁, D″₂];
D″₁ = L/(1 + (2.4W×C + E)×L/(2W×F×B)) (6)
D″₂ = L/(1 - (2.4W×C + E)×L/(2W×F×B)) (7)
where C = 0.0174, W is the projection screen width, and E is the interpupillary distance;
Finally, use formula (8) to compute the stereoscopic measure Depth_angle of an object model in the scene located at depth value f and having thickness depth; if the object model's stereoscopic effect is insufficient, move the object model closer to the virtual cameras or increase the interaxial distance B, and vice versa; when increasing or decreasing B, ensure that the depth range [D′₁, D′₂] remains within [D″₁, D″₂];
Depth_angle = 1.67×F×B×depth/((f + depth)×f) (8)
Further, the detailed process of step 4 of the present invention is:
Given the distance d from the planar material to the virtual cameras, and letting the width of the final rendered image be Size pixels, the displacement sd of the planar material's image in the two virtual cameras is
sd = Size×z×(d - L)/(2d)
z = F×B/L
where sd is in pixels;
Translate the planar material's image in the left virtual camera sd pixels to the left; if sd is less than 0, translate it |sd| pixels to the right. Translate the planar material's image in the right virtual camera sd pixels to the right; if sd is less than 0, translate it |sd| pixels to the left.
Beneficial effects:
First, the present invention can determine a suitable interaxial distance between the virtual cameras according to the depth range of the scene to be presented, presenting the maximum stereoscopic effect while keeping viewing comfortable for the audience.
Second, by translating the images of planar materials in the two virtual cameras, the present invention gives planar materials a depth-of-field effect in stereoscopic film production.
Brief description of the drawings
Fig. 1 is the flowchart of the projection-based dynamic mapping method of the present invention.
Fig. 2 is a schematic diagram of the parallax types and image translations of the left and right virtual cameras of the present invention:
a) when an object is behind the zero-parallax plane, positive parallax is formed: the object's image in the left camera is translated to the left, and its image in the right camera is translated to the right;
b) when an object is in front of the zero-parallax plane, negative parallax is formed: the object's image in the left camera is translated to the right, and its image in the right camera is translated to the left;
c) when an object is on the zero-parallax plane, zero parallax is formed: the object's images in the left and right cameras are not translated.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present application clearer, the technical solutions are described clearly and completely below in conjunction with specific embodiments of the application and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the application without creative effort fall within the scope of protection of the application.
The technical solutions provided by the embodiments of the application are described in detail below in conjunction with the drawings.
As shown in Fig. 1, a projection-based dynamic mapping method proceeds as follows:
Step 1: map each vertex of the object models in the scene to uv coordinates in the dynamic map.
In this embodiment, because effects of dramatic change over time are to be produced, background models such as mountains and clouds undergo large deformations. To ensure that the dynamic map always projects correctly onto the object models, each vertex of an object model is expressed in camera coordinates and transformed by a perspective projection into the camera's canonical view volume. Within this space, a vertex lying inside the [-1, 1]³ cube is considered to be within the virtual camera's field of view, so perspective division is applied to obtain its normalized device coordinates, which are taken as the vertex's uv coordinates in the map; a vertex not inside the [-1, 1]³ cube is considered outside the virtual camera's field of view, so no uv coordinates are assigned to it. This step automates the manual uv mapping of the traditional production pipeline.
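As an illustration of this step, the following minimal Python sketch (our own illustration, not the patent's implementation; the matrix layout and the numpy dependency are assumptions) transforms a vertex by a camera's combined view-projection matrix, performs the perspective division, and uses the resulting normalized device coordinates as uv coordinates:

    import numpy as np

    def vertex_to_uv(vertex, view_proj):
        # vertex:    (3,) world-space position
        # view_proj: (4, 4) combined view and perspective-projection matrix
        # Returns (u, v) in [0, 1], or None when the vertex falls outside
        # the camera's view volume (no uv coordinates are assigned).
        clip = view_proj @ np.append(vertex, 1.0)  # homogeneous clip space
        ndc = clip[:3] / clip[3]                   # perspective division
        if np.any(np.abs(ndc) > 1.0):              # outside the [-1, 1]^3 cube
            return None
        u = (ndc[0] + 1.0) / 2.0                   # remap NDC x from [-1, 1] to [0, 1]
        v = (ndc[1] + 1.0) / 2.0                   # remap NDC y likewise
        return (u, v)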
Step 2: draw the dynamic map, and project the drawn dynamic map onto the object models in the scene, so that the colors on the object models change over time, realizing the effect designed by the animator.
Because the object models in the scene transform over time as designed, a dynamic map can be rendered for each frame according to the mapping relation determined in step 1, and the dynamic map of each frame is then projected onto the object models in the scene, so that the scene changes over time.
Because the map is projected onto models in the scene, when it is drawn in 2D animation software, such as Flash, the drawing needs to extend suitably beyond the models' outlines.
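The following small Python sketch (purely illustrative; the procedurally generated map stands in for the hand-drawn dynamic map, and all names and values are our assumptions) shows the per-frame structure of this step: the uv coordinates fixed in step 1 are reused every frame to sample a map that is redrawn every frame, so the colors on the model change over time:

    import numpy as np

    def dynamic_map(frame, size=64):
        # Stand-in for the hand-drawn dynamic map: a pattern varying per frame.
        yy, xx = np.mgrid[0:size, 0:size] / size
        return np.sin(2.0 * np.pi * (xx + frame / 24.0)) + 0.0 * yy

    def sample(texture, u, v):
        # Nearest-neighbour lookup of a uv coordinate in the map.
        h, w = texture.shape
        return texture[min(int(v * h), h - 1), min(int(u * w), w - 1)]

    # uv coordinates as produced by the step-1 mapping for three vertices
    uvs = [(0.25, 0.5), (0.5, 0.5), (0.75, 0.5)]

    for frame in range(3):                    # a few frames of the animation
        tex = dynamic_map(frame)              # draw this frame's dynamic map
        colours = [sample(tex, u, v) for u, v in uvs]  # project onto the model
        print(f"frame {frame}: vertex colours {np.round(colours, 3)}")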
Step 3: according to the depth range of the scene to be presented, set the interaxial distance between the two virtual cameras, ensure that all objects in the scene lie within a suitable depth range, and adjust the stereoscopic effect of the objects.
(1) First render a depth map of the scene, i.e. obtain the distance of each object in the scene from the virtual cameras, and thereby obtain the depth range [D₁, D₂] of all object models in the scene.
(2) Within the depth range [D₁, D₂], determine the depth range [D′₁, D′₂] of the object models to be presented in the scene, and compute the theoretical interaxial distance between the two virtual cameras from [D′₁, D′₂]. Once the interaxial distance of the two virtual cameras is fixed, the depth range within which the audience will not feel dizzy is determined by it; to guarantee that the audience feels no discomfort, the suitable depth range [D″₁, D″₂] of the scene is computed from the theoretical interaxial distance. If [D′₁, D′₂] does not lie within [D″₁, D″₂], the depth range occupied by the object models in the scene exceeds the range the audience can comfortably perceive, and the interaxial distance B between the two virtual cameras must be readjusted until [D′₁, D′₂] lies within [D″₁, D″₂]. The thickness of the object models is then adjusted according to the stereoscopic effect required of them in the scene.
The concrete computation of this step (2) is:
First, within the depth range [D₁, D₂], determine the depth range [D′₁, D′₂] of the scene to be presented, and use formula (1) to compute the theoretical interaxial distance B between the two virtual cameras; B is then adjusted to the needs of the scene: increasing B strengthens the stereoscopic effect of the current scene, and decreasing B weakens it.
B = P/(F×(1/n - 1/m)) (1)
P = 2×0.0174×width
where F is the focal length of the virtual cameras, n = D′₁, m = D′₂, and width is the virtual film-back size set on the virtual cameras, in mm;
Then, using the currently set distance L from the convergence plane to the virtual cameras, compute the depth range [D″₁, D″₂] from formulas (6) and (7); if [D′₁, D′₂] does not lie within [D″₁, D″₂], readjust the interaxial distance B between the two virtual cameras until [D′₁, D′₂] lies within [D″₁, D″₂]; this guarantees that the audience experiences no discomfort while viewing.
D″₁ = L/(1 + (2.4W×C + E)×L/(2W×F×B)) (6)
D″₂ = L/(1 - (2.4W×C + E)×L/(2W×F×B)) (7)
where C = 0.0174, W is the projection screen width, and E is the interpupillary distance.
Finally, use formula (8) to compute the stereoscopic measure Depth_angle of an object model in the scene located at depth value f and having thickness depth; if the object model's stereoscopic effect is insufficient, the object model can be moved closer to the virtual cameras or the interaxial distance B can be increased, and vice versa; when increasing or decreasing B, ensure that the depth range [D′₁, D′₂] remains within [D″₁, D″₂];
Depth_angle = 1.67×F×B×depth/((f + depth)×f) (8)
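The computations of this step can be condensed into a few short functions. The following Python sketch is our own transcription of formulas (1), (6), (7) and (8); the default values for E and C are taken from the text above, everything else is an assumption:

    def interaxial(P, F, n, m):
        # Formula (1): theoretical interaxial distance B for a scene whose
        # nearest and farthest presented objects are at distances n and m.
        return P / (F * (1.0 / n - 1.0 / m))

    def comfortable_range(L, W, F, B, E=0.065, C=0.0174):
        # Formulas (6) and (7): the depth range [D''1, D''2] the audience can
        # watch comfortably, given convergence distance L, screen width W,
        # focal length F, interaxial distance B and interpupillary distance E.
        k = (2.4 * W * C + E) * L / (2.0 * W * F * B)
        return L / (1.0 + k), L / (1.0 - k)

    def depth_angle(F, B, f, depth):
        # Formula (8): stereoscopic measure of an object of thickness `depth`
        # located at depth value f.
        return 1.67 * F * B * depth / ((f + depth) * f)

A caller would iterate exactly as the text prescribes: compute B from the range to be presented, compute [D″₁, D″₂] from that B, and if the presented range falls outside it, adjust B and recompute.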
Step 4: for each planar material in the scene, apply a displacement transformation to its image in each of the two virtual cameras, so that the planar material exhibits depth of field.
Given the distance d from the planar material to the virtual cameras, which may be nearer or farther than the distance L from the convergence plane to the virtual cameras, and letting the width of the final rendered image be Size pixels, the displacement sd of the planar material's image in the two virtual cameras is
sd = Size×z×(d - L)/(2d)
z = F×B/L
where sd is in pixels;
As shown in Fig. 2, translate the planar material's image in the left virtual camera sd pixels to the left; if sd is less than 0, translate it |sd| pixels to the right. Translate the planar material's image in the right virtual camera sd pixels to the right; if sd is less than 0, translate it |sd| pixels to the left.
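A minimal Python sketch of this step follows (assuming numpy; filling the vacated columns with zeros is our choice, as the patent does not specify how they are filled, and the sample parameter values in the comments are arbitrary):

    import numpy as np

    def plane_displacement(Size, F, B, L, d):
        # sd = Size*z*(d - L)/(2d) with z = F*B/L, per the formulas above.
        z = F * B / L
        return Size * z * (d - L) / (2.0 * d)

    def shift_horizontal(img, pixels):
        # Translate an image horizontally by `pixels` (positive = to the
        # right), filling the vacated columns with zeros.
        out = np.zeros_like(img)
        p = int(round(pixels))
        if p > 0:
            out[:, p:] = img[:, :-p]
        elif p < 0:
            out[:, :p] = img[:, -p:]
        else:
            out[:] = img
        return out

    # The left-camera image moves sd pixels to the left and the right-camera
    # image sd pixels to the right; a negative sd reverses both directions.
    # sd = plane_displacement(Size=1920, F=28.9, B=6.0, L=500.0, d=800.0)
    # left_shifted  = shift_horizontal(left_img,  -sd)
    # right_shifted = shift_horizontal(right_img, +sd)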
The derivation of formulas (1) to (8) is briefly described below:
According to the camera lens and camera position set by the animation director, and the depth range of the objects the scene is to present, i.e. the nearest object at distance n from the camera and the farthest object at distance m, the theoretical interaxial distance B of the two virtual cameras meeting the above requirements under the virtual-camera model is
B = P/(F×(1/n - 1/m)) (1)
P = 2×0.0174×width
where F is the focal length of the virtual cameras, generally F = 28.9 mm, and width is the virtual film-back size set on the virtual cameras, in mm; for common 35 mm film, P = 2×0.0174×35 ≈ 1.2.
If the distance from the zero-parallax plane (the convergence plane) to the cameras is L, the relative distance z (the value before convergence adjustment) between the left-eye and right-eye images of a point on the zero-parallax plane is
z = F×B/L; (2)
Let dn be the distance from the virtual cameras to an object that lies within the scene's set depth range in front of the convergence plane, and df the distance from the virtual cameras to an object that lies behind the convergence plane; in the 3D production tool, the virtual film back of the virtual camera is taken to have unit length 1. The relative distances Delta_n and Delta_f (values after convergence adjustment) between the imaging points, on the two virtual cameras' film backs, of objects in front of and behind the convergence plane are respectively:
Delta_n = z×(L - dn)/dn (3)
Delta_f = z×(df - L)/df; (4)
If the relative distance (after convergence adjustment) between an object's imaging points on the two virtual cameras' film backs is delta, and the imaging points are finally projected onto a large screen of width W meters, the binocular parallax angle seen by a front-row viewer is:
delta_a = (2×delta×W - E)/(1.2×W) (5)
where E is the interpupillary distance, about 0.065 m for adults and about 0.055 m for children. Following the usual cinema layout, the front row sits at a distance of about 0.6W from the screen.
Because the human eyes judge object distance by converging their lines of sight, an angular difference greater than 2 degrees between an object's images in the left and right eyes causes the lines of sight to separate rather than converge, which is perceptually uncomfortable; therefore delta_a should be less than 2 degrees. The radian value of one degree is C = π/180 ≈ 0.01745, so delta_a <= 2C.
Substituting formulas (3) and (4) into formula (5) as delta, the minimum acceptable object distance Sn and maximum acceptable object distance Sf that avoid line-of-sight separation are:
Sn = L/(1 + (2.4W×C + E)×L/(2W×F×B)) (6)
Sf = L/(1 - (2.4W×C + E)×L/(2W×F×B)) (7)
where D″₁ = Sn and D″₂ = Sf; the unit of Sn and Sf is the unit set in the 3D animation tool, generally cm, though it may also be m, etc. When an object's distance d from the virtual cameras is not in the range [Sn, Sf], the angular difference between the object's left-eye and right-eye images causes the viewer's lines of sight to separate, which is uncomfortable.
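For completeness, the substitution can be reconstructed as follows (our own algebra, consistent with formulas (2), (3), (4) and (5) above, imposing the comfort limit delta_a <= 2C):

    % near side: substitute delta = Delta_n = z (L - d_n)/d_n, z = FB/L, into (5)
    \frac{2W \cdot \frac{FB}{L} \cdot \frac{L - d_n}{d_n} - E}{1.2\,W} \le 2C
    \;\Longrightarrow\;
    \frac{L - d_n}{d_n} \le \frac{(2.4WC + E)\,L}{2WFB}
    \;\Longrightarrow\;
    d_n \ge \frac{L}{1 + (2.4WC + E)\,L/(2WFB)} = S_n
    % far side: use (4) instead of (3)
    \frac{d_f - L}{d_f} \le \frac{(2.4WC + E)\,L}{2WFB}
    \;\Longrightarrow\;
    d_f \le \frac{L}{1 - (2.4WC + E)\,L/(2WFB)} = S_f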
Finally, consider an object at distance f from the virtual cameras with thickness depth, where f lies in the range [L, Sf] (objects in [Sn, L] naturally have a stronger stereoscopic effect and a larger angular difference). Using formula (4), the difference between the parallax angles finally presented by the front and back of this object (i.e. its stereoscopic effect) is:
Depth_angle = 1.67×F×B×depth/((f + depth)×f) (8)
Because the minimum angular resolution of the human eye is 1 arcminute, approximately 0.000291 rad, when Depth_angle is smaller than 1 arcminute the object appears to carry no depth information, i.e. it looks like a plane. It follows that the farther an object is from the cameras, or the smaller the interaxial distance, the smaller the difference of parallax angles; Depth_angle can therefore be used to represent the stereoscopic effect of an object.

Claims (4)

1. A projection-based dynamic mapping method, characterized in that the detailed process is:
Step 1: map each vertex of the object models in the scene to uv coordinates in the dynamic map;
Step 2: draw the dynamic map, and project the drawn dynamic map onto the object models in the scene;
Step 3: according to the depth range of the scene to be presented, set the interaxial distance B between the two virtual cameras, ensure that all objects in the scene lie within a suitable depth range, and adjust the stereoscopic effect of the objects;
the suitable depth range being [D″₁, D″₂]:
D″₁ = L/(1 + (2.4W×C + E)×L/(2W×F×B)) (6)
D″₂ = L/(1 - (2.4W×C + E)×L/(2W×F×B)) (7)
where C = 0.0174, W is the projection screen width, E is the interpupillary distance, F is the focal length of the virtual cameras, and L is the distance from the convergence plane to the virtual cameras;
Step 4: for each planar material in the scene, apply a displacement transformation to its image in each of the two virtual cameras, so that the planar material exhibits depth of field.
2. The projection-based dynamic mapping method according to claim 1, characterized in that the detailed process of step 3 is:
(1) first render a depth map of the scene, obtaining the depth range [D₁, D₂] of all object models in the scene;
(2) within the depth range [D₁, D₂], determine the depth range [D′₁, D′₂] of the object models to be presented in the scene, compute the theoretical interaxial distance between the two virtual cameras from [D′₁, D′₂], and compute the suitable depth range [D″₁, D″₂] of the scene from this interaxial distance; if [D′₁, D′₂] does not lie within [D″₁, D″₂], readjust the interaxial distance between the two virtual cameras until [D′₁, D′₂] lies within [D″₁, D″₂]; and adjust the thickness of the object models according to the stereoscopic effect required of them in the scene.
3. The projection-based dynamic mapping method according to claim 2, characterized in that the detailed process of step (2) is:
first, within the depth range [D₁, D₂], determine the depth range [D′₁, D′₂] of the scene to be presented, and use formula (1) to compute the theoretical interaxial distance B between the two virtual cameras,
B = P/(F×(1/n - 1/m)) (1)
P = 2×0.0174×width
where F is the focal length of the virtual cameras, n = D′₁, m = D′₂, and width is the virtual film-back size set on the virtual cameras;
then, using the currently set distance L from the convergence plane to the virtual cameras, compute the depth range [D″₁, D″₂] from formulas (6) and (7); if [D′₁, D′₂] does not lie within [D″₁, D″₂], readjust the interaxial distance B between the two virtual cameras until [D′₁, D′₂] lies within [D″₁, D″₂];
D″₁ = L/(1 + (2.4W×C + E)×L/(2W×F×B)) (6)
D″₂ = L/(1 - (2.4W×C + E)×L/(2W×F×B)) (7)
where C = 0.0174, W is the projection screen width, and E is the interpupillary distance;
finally, use formula (8) to compute the stereoscopic measure Depth_angle of an object model in the scene located at depth value f and having thickness depth; if the object model's stereoscopic effect is insufficient, move the object model closer to the virtual cameras or increase the interaxial distance B, and vice versa; when increasing or decreasing B, ensure that the depth range [D′₁, D′₂] remains within [D″₁, D″₂];
Depth_angle = 1.67×F×B×depth/((f + depth)×f) (8).
4. The projection-based dynamic mapping method according to claim 1, characterized in that the detailed process of step 4 is:
given the distance d from the planar material to the virtual cameras, and letting the width of the final rendered image be Size pixels, the displacement sd of the planar material's image in the two virtual cameras is
sd = Size×z×(d - L)/(2d)
z = F×B/L
where sd is in pixels;
translate the planar material's image in the left virtual camera sd pixels to the left, and if sd is less than 0, translate it |sd| pixels to the right; translate the planar material's image in the right virtual camera sd pixels to the right, and if sd is less than 0, translate it |sd| pixels to the left.
CN201510062058.3A 2015-02-05 2015-02-05 Projection-based dynamic mapping method Active CN104599308B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510062058.3A CN104599308B (en) 2015-02-05 2015-02-05 Projection-based dynamic mapping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510062058.3A CN104599308B (en) 2015-02-05 2015-02-05 Projection-based dynamic mapping method

Publications (2)

Publication Number Publication Date
CN104599308A 2015-05-06
CN104599308B CN104599308B (en) 2017-11-21

Family

ID=53125058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510062058.3A Active CN104599308B (en) Projection-based dynamic mapping method

Country Status (1)

Country Link
CN (1) CN104599308B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105488801A (en) * 2015-12-01 2016-04-13 深圳华强数码电影有限公司 Method and system for combining real shooting of full dome film with three-dimensional virtual scene
CN105574914A (en) * 2015-12-18 2016-05-11 深圳市沃优文化有限公司 Manufacturing device and manufacturing method of 3D dynamic scene
CN106227417A (en) * 2015-09-01 2016-12-14 深圳创锐思科技有限公司 A kind of three-dimensional user interface exchange method, device, display box and system thereof
CN106780677A (en) * 2016-12-15 2017-05-31 南京偶酷软件有限公司 The method that three-dimensional animation visual effect is simulated by camera motion background layered shaping
CN107481305A (en) * 2017-08-18 2017-12-15 苏州歌者网络科技有限公司 Game movie preparation method
CN113706674A (en) * 2021-07-30 2021-11-26 北京原力棱镜科技有限公司 Method and device for manufacturing model map, storage medium and computer equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110069A1 (en) * 2008-10-31 2010-05-06 Sharp Laboratories Of America, Inc. System for rendering virtual see-through scenes
CN102997891A (en) * 2012-11-16 2013-03-27 上海光亮光电科技有限公司 Device and method for measuring scene depth

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110069A1 (en) * 2008-10-31 2010-05-06 Sharp Laboratories Of America, Inc. System for rendering virtual see-through scenes
CN102997891A (en) * 2012-11-16 2013-03-27 上海光亮光电科技有限公司 Device and method for measuring scene depth

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
付浩生等 (Fu Haosheng et al.), "基于动态纹理和投影贴图技术的水刻蚀效果仿真" [Simulation of water-etching effects based on dynamic texture and projection mapping techniques], 《计算机技术与发展》 (Computer Technology and Development) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106227417A (en) * 2015-09-01 2016-12-14 深圳创锐思科技有限公司 A kind of three-dimensional user interface exchange method, device, display box and system thereof
CN106227417B (en) * 2015-09-01 2018-01-30 深圳创锐思科技有限公司 A kind of three-dimensional user interface exchange method, device, display box and its system
CN105488801A (en) * 2015-12-01 2016-04-13 深圳华强数码电影有限公司 Method and system for combining real shooting of full dome film with three-dimensional virtual scene
CN105488801B (en) * 2015-12-01 2019-02-15 深圳华强数码电影有限公司 The method and system that spherical screen stereoscopic film real scene shooting and three-dimensional virtual scene combine
CN105574914A (en) * 2015-12-18 2016-05-11 深圳市沃优文化有限公司 Manufacturing device and manufacturing method of 3D dynamic scene
CN105574914B (en) * 2015-12-18 2018-11-30 深圳市沃优文化有限公司 The producing device and preparation method thereof of 3D dynamic scene
CN106780677A (en) * 2016-12-15 2017-05-31 南京偶酷软件有限公司 The method that three-dimensional animation visual effect is simulated by camera motion background layered shaping
CN106780677B (en) * 2016-12-15 2020-01-10 南京偶酷软件有限公司 Method for simulating three-dimensional animation visual effect through lens motion background layering processing
CN107481305A (en) * 2017-08-18 2017-12-15 苏州歌者网络科技有限公司 Game movie preparation method
CN113706674A (en) * 2021-07-30 2021-11-26 北京原力棱镜科技有限公司 Method and device for manufacturing model map, storage medium and computer equipment
CN113706674B (en) * 2021-07-30 2023-11-24 北京原力棱镜科技有限公司 Method and device for manufacturing model map, storage medium and computer equipment

Also Published As

Publication number Publication date
CN104599308B (en) 2017-11-21

Similar Documents

Publication Publication Date Title
US8711204B2 (en) Stereoscopic editing for video production, post-production and display adaptation
CN104599308A (en) Projection-based dynamic mapping method
US9445072B2 (en) Synthesizing views based on image domain warping
US7983477B2 (en) Method and apparatus for generating a stereoscopic image
WO2019041351A1 (en) Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene
JP2019079552A (en) Improvements in and relating to image making
US20080049100A1 (en) Algorithmic interaxial reduction
AU2018249563B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
US6388666B1 (en) System and method for generating stereoscopic image data
WO2017128887A1 (en) Method and system for corrected 3d display of panoramic image and device
CN105141941A (en) Digital panoramic 3D film production method and system
CN102929091A (en) Method for manufacturing digital spherical curtain three-dimensional film
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
JP4996922B2 (en) 3D visualization
WO2024002023A1 (en) Method and apparatus for generating panoramic stereoscopic image, and electronic device
US10110876B1 (en) System and method for displaying images in 3-D stereo
CN109151273B (en) Fan stereo camera and stereo measurement method
Benzeroual et al. Distortions of space in stereoscopic 3d content
CN103400339A (en) Manufacturing method of 3D (three-dimensional) ground stickers
CN218886314U (en) AR glasses light path reflection projection arrangement
TW201926256A (en) Building VR environment from movie
US8952958B1 (en) Stereoscopic computer-animation techniques based on perceptual constraints
NZ757902B2 (en) System, method and software for producing virtual three dimensional images that appear to project forward of or above an electronic display
CN104503199A (en) Technique for photographing and fabricating children's 3D picture book
WO2018233387A1 (en) Naked-eye 3d display method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160118

Address after: Room 1008, Building 10, No. 511 Jianye Road, Binjiang District, Hangzhou, Zhejiang 310053, China

Applicant after: Zhejiang Fanju Technology Co., Ltd.

Address before: No. 95 Zhongguancun East Road, Haidian District, Beijing 100083

Applicant before: Beijing section skill has appearance science and technology limited Company

TA01 Transfer of patent application right

Effective date of registration: 20170807

Address after: Room 2, Building 3, No. 30 Xingxing Street, Shijingshan District, Beijing 100041

Applicant after: Beijing section skill has appearance science and technology limited Company

Address before: Room 1008, Building 10, No. 511 Jianye Road, Binjiang District, Hangzhou, Zhejiang 310053, China

Applicant before: Zhejiang Fanju Technology Co., Ltd.

GR01 Patent grant