CN118154760A - Real-time interaction method and system based on three-dimensional engine on model surface - Google Patents

Real-time interaction method and system based on three-dimensional engine on model surface

Info

Publication number
CN118154760A
Authority
CN
China
Prior art keywords
dimensional
model
real
plane
engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410571296.6A
Other languages
Chinese (zh)
Other versions
CN118154760B (en)
Inventor
杨斌
钟伟
王春博
柳晓坤
柳紫涵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jerei Digital Technology Co Ltd
Original Assignee
Shandong Jerei Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jerei Digital Technology Co Ltd filed Critical Shandong Jerei Digital Technology Co Ltd
Priority to CN202410571296.6A priority Critical patent/CN118154760B/en
Priority claimed from CN202410571296.6A external-priority patent/CN118154760B/en
Publication of CN118154760A publication Critical patent/CN118154760A/en
Application granted granted Critical
Publication of CN118154760B publication Critical patent/CN118154760B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of user interaction in a three-dimensional engine, and in particular to a real-time interaction method and system based on a three-dimensional engine on a model surface. The real-time interaction method comprises the following steps: acquiring a planar UI; drawing the planar UI to a texture map using the RenderTexture technique; rendering the resulting map onto the model through a shader; and writing a script that converts the coordinates and behavior of user operations on the model surface back onto the planar UI. By combining the curvature and structure of the model surface with the user's surroundings, the method creates a stronger sense of immersion and enhances the user's sense of participation and immersive experience.

Description

Real-time interaction method and system based on three-dimensional engine on model surface
Technical Field
The invention relates to the technical field of user interaction in a three-dimensional engine, and in particular to a real-time interaction method and system based on a three-dimensional engine on a model surface.
Background
In the prior art, the various user interfaces (UIs) that interact with users do so only on a plane, which brings problems and limitations. First, a planar UI cannot provide the same sense of immersion as a curved UI. Because it exists only in two dimensions, a planar UI cannot simulate real-world depth and volume through shapes and curves, so it can feel flat and unrealistic to use.
Meanwhile, the main presentation space of a planar UI is the screen plane, and the user's field of view is limited by screen size and resolution. Compared with a curved UI, a planar UI may display less content at once, requiring the user to scroll or switch pages to view more information, which reduces efficiency and convenience.
In addition, planar UIs typically have a fixed layout and a compact design, lacking the dynamic, layered visual effects that curved UIs possess. This can make the interface appear monotonous and flat, making it difficult to attract the user's attention or add visual appeal.
Therefore, a real-time interaction method and system based on a three-dimensional engine on a model surface are needed.
Disclosure of Invention
In order to solve the above-mentioned problems, the present invention provides a real-time interaction method and system based on a three-dimensional engine on a model surface.
In a first aspect, the present invention provides a real-time interaction method based on a three-dimensional engine on a model surface, which adopts the following technical scheme:
a real-time interaction method based on a three-dimensional engine on a model surface comprises the following steps:
acquiring a planar UI;
drawing the planar UI to a texture map using the RenderTexture technique;
rendering the resulting map onto the model through a shader;
and writing a script that converts the coordinates and behavior of user operations on the model surface onto the planar UI.
Further, acquiring the planar UI includes creating a UI camera in the three-dimensional graphics engine and rendering the planar UI with the UI camera.
Further, drawing the planar UI to a map based on the RenderTexture technique includes setting the planar UI as the rendering target of the UI camera, creating a RenderTexture map in the three-dimensional graphics engine, setting the resolution of the RenderTexture map to match the rendering target resolution of the UI camera, creating a material ball in the three-dimensional graphics engine, and assigning the RenderTexture map to the UI camera's target texture and to the material ball's main texture, respectively.
Further, rendering the map onto the model through a shader includes building a three-dimensional UI model, importing it into the three-dimensional engine, assigning the material ball to the model, and adding a Mesh collision box to the three-dimensional UI model.
Further, writing a script to convert the coordinates and behavior of user operations on the model surface onto the planar UI includes: performing collision detection against the surface of the Mesh collision box using rays; when a ray intersects the Mesh collision box, obtaining the intersection coordinates and converting them into the corresponding UV coordinate point of the three-dimensional UI model; converting that UV coordinate point into a coordinate point on the UI canvas; converting the user's operation behavior into an operation on the corresponding UI component according to the coordinate point on the UI canvas; and re-rendering the UI onto the corresponding map after the user operation completes.
Further, obtaining the intersection coordinates and converting them into the corresponding UV coordinate point of the three-dimensional UI model includes locating the point the ray hits on the surface of the three-dimensional UI model by a triangle-locating method: the intersection of the ray with the model surface is obtained through the collision box, each triangle in the Mesh triangle array of the three-dimensional UI model is traversed to locate the triangle containing the intersection, the triangle's UV position is obtained through its vertex indices, and the proportionally corresponding coordinate position in the planar UI is calculated from the UV coordinates.
Further, converting the user's operation behavior into an operation on the corresponding UI component includes invoking the three-dimensional engine's internal methods, through events, for the user's operation at the corresponding coordinates of the three-dimensional UI model; mapping the operation onto the corresponding planar UI; and rendering the planar UI in real time through the camera and drawing it onto the map, where the camera's rendered picture is obtained through the three-dimensional engine's internal method, each pixel color is read, and the pixel colors are drawn one by one onto a new map and rendered onto the model surface through the shader.
In a second aspect, a real-time interaction system based on a three-dimensional engine on a model surface comprises:
a data acquisition module configured to acquire a planar UI;
a drawing module configured to draw the planar UI to a map based on the RenderTexture technique;
a rendering module configured to render the map onto the model through a shader;
and a conversion module configured to run a script that converts the coordinates and behavior of user operations on the model surface onto the planar UI.
In a third aspect, the present invention provides a computer-readable storage medium storing a plurality of instructions adapted to be loaded by a processor of a terminal device and to execute the above real-time interaction method based on a three-dimensional engine on a model surface.
In a fourth aspect, the present invention provides a terminal device comprising a processor and a computer-readable storage medium, where the processor is configured to implement the instructions, and the computer-readable storage medium stores a plurality of instructions adapted to be loaded by the processor and to execute the above real-time interaction method based on a three-dimensional engine on a model surface.
In summary, the invention has the following beneficial technical effects:
1. The method creates a stronger sense of immersion for the user: combining the curvature and structure of the model surface with the user's surroundings enhances the user's sense of participation and immersive experience.
2. The distinctive UI appearance and shape of the method are more visually attractive and unique than a conventional planar UI. Using curves, arcs, and the third dimension makes the interface more dynamic and eye-catching, drawing the user's gaze.
3. The method uses the planar UI as the interaction bottom layer, so it has good compatibility: existing UI forms can be modified conveniently, and the many interactable components do not need to be rewritten.
Drawings
Fig. 1 is a schematic diagram of a real-time interaction method based on a three-dimensional engine on a model surface according to embodiment 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
Referring to fig. 1, this embodiment provides a real-time interaction method based on a three-dimensional engine on a model surface, where the three-dimensional engine is Unity3D and the language is C#. The method comprises the following steps:
Step 1: first, acquire a planar UI and create a camera in the three-dimensional graphics engine; render the planar UI with the camera, then draw it to a map through the RenderTexture technique.
In step 1, a UI camera is created in the scene and its parameters are set.
Specifically, the UI camera's rendering level is set to the UI layer, the camera's target render texture is set to the RenderTexture map, the camera's clipping distance is adjusted, and the camera's background rendering color is set.
A planar UI is created and set as the rendering target of the UI camera. The render mode of the UI's Canvas component is set to render with a camera (Screen Space - Camera), the UI camera is set as its render camera, and the camera's rendering plane distance is adjusted.
Meanwhile, a RenderTexture map is created in the three-dimensional engine, and its resolution is set to match the rendering target resolution of the UI camera.
Next, a material ball is created in the three-dimensional engine and the RenderTexture map is assigned as its main texture. The render mode of the UI is kept camera-based to ensure that the UI camera renders the UI elements correctly, and a mask layer is added to the UI camera so that it sees only UI objects. Finally, the RenderTexture map is assigned to the target texture of the UI camera and to the main texture of the material ball on the model surface, respectively, by dragging it onto the material parameters in the three-dimensional engine, so that the UI rendering result is projected onto the model surface in real time.
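The following is a minimal C# sketch of the Step 1 wiring in Unity3D; the class and field names (UiToRenderTexture, uiCamera, planarUi, surfaceMaterial) are illustrative rather than taken from the patent, and a 1024x1024 map is assumed:

using UnityEngine;

// Step 1 sketch: a UI camera renders the planar UI into a RenderTexture,
// which is then assigned as the main texture of the material on the model.
public class UiToRenderTexture : MonoBehaviour
{
    public Camera uiCamera;          // camera that renders only the UI layer
    public Canvas planarUi;          // canvas of the planar UI
    public Material surfaceMaterial; // material ball used on the model surface
    public int width = 1024, height = 1024;

    void Start()
    {
        // Render only objects on the UI layer, against a solid background.
        uiCamera.cullingMask = LayerMask.GetMask("UI");
        uiCamera.clearFlags = CameraClearFlags.SolidColor;
        uiCamera.backgroundColor = Color.black;

        // Canvas in Screen Space - Camera mode, rendered by the UI camera.
        planarUi.renderMode = RenderMode.ScreenSpaceCamera;
        planarUi.worldCamera = uiCamera;
        planarUi.planeDistance = 1f;

        // RenderTexture resolution matches the UI camera's render target.
        var rt = new RenderTexture(width, height, 24);
        uiCamera.targetTexture = rt;       // UI camera draws into the map
        surfaceMaterial.mainTexture = rt;  // model surface samples the map
    }
}

Attaching this component to a scene object and assigning the camera, canvas, and material in the editor reproduces in code the drag-and-assign operations described above.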
Step 2: rendering the rendered map to the model by a shader;
In the step 2, a three-dimensional UI model is established, the three-dimensional UI model is imported into a three-dimensional engine, the material balls newly established in the step 1 are endowed to the model, and a Mesh collision box is added for the three-dimensional UI model. In particular, the method comprises the steps of,
Setting three-dimensional space coordinates according to a required model shape, storing the three-dimensional space coordinates into an array, linking adjacent three-dimensional coordinates to form triangular surfaces according to the requirement, storing subscripts of the three-dimensional coordinates forming the triangular surfaces in data into a new array to form triangular surface array information of the model, forming the surface of the model by a plurality of triangular surfaces, setting a plane coordinate system of each point from the lower left corner (0, 0) to the upper right corner (1, 1) of a picture position by UV coordinates of vertexes, finding out data such as colors corresponding to the vertexes, and 'pasting' texture chartlets onto the surface of the model in such a way, so that the chartlets can be tiled onto the whole model surface.
And acquiring a triangular surface of the model by using a Mesh collision box, and then positioning the position point of the ray on the surface of the model by a triangular surface positioning method. The Mesh collision box is a component of the three-dimensional engine and can be directly added, the model Mesh contains a triangular face array, and the Mesh collision box can be directly obtained.
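A sketch of the Step 2 mesh construction, under the assumption of the simplest possible surface (a single quad; a curved surface would use more vertices). The vertex, triangle-index, and UV arrays correspond directly to the arrays described above; the class name UiSurfaceMesh is illustrative:

using UnityEngine;

// Step 2 sketch: build a mesh from a vertex array, a triangle index array,
// and UVs running from (0,0) at the lower left to (1,1) at the upper right,
// assign the Step 1 material, and add a MeshCollider for ray hits.
[RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
public class UiSurfaceMesh : MonoBehaviour
{
    public Material surfaceMaterial; // material ball holding the RenderTexture

    void Start()
    {
        var mesh = new Mesh();
        // Four vertices of a unit quad, stored in an array.
        mesh.vertices = new[] {
            new Vector3(0, 0, 0), new Vector3(1, 0, 0),
            new Vector3(0, 1, 0), new Vector3(1, 1, 0)
        };
        // Two triangles, stored as indices into the vertex array.
        mesh.triangles = new[] { 0, 2, 1, 2, 3, 1 };
        // Per-vertex UVs tile the map across the whole surface.
        mesh.uv = new[] {
            new Vector2(0, 0), new Vector2(1, 0),
            new Vector2(0, 1), new Vector2(1, 1)
        };
        mesh.RecalculateNormals();

        GetComponent<MeshFilter>().mesh = mesh;
        GetComponent<MeshRenderer>().material = surfaceMaterial;
        // The Mesh collision box exposes the triangle array to raycasts.
        gameObject.AddComponent<MeshCollider>().sharedMesh = mesh;
    }
}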
Step 3: the script is written to translate the coordinates and behavior of the user operating on the model surface onto the user plane UI.
In the step 3, through correctly mounting the script and configuring the proper camera, variable and attribute parameters, the method can map the operation of the user on the specific model to the plane UI and render the UI picture to the corresponding model surface in real time.
Specifically, the mounting script detects in real time that when a user operates, the user operation type and operation coordinates are obtained through the three-dimensional engine input system, then rays are sent to the surface of the model from the user operation coordinates, the junction points of the rays and the surface of the model are obtained through the collision box, each triangular surface in the Mesh triangular surface array of the model is traversed, and the junction points are positioned in which triangular surface.
The locating method is as follows: let the intersection point be P and the three vertices of the triangle be a, b, and c. Compute the vectors pa = P − a, pb = P − b, and pc = P − c; compute the cross products pab = pa × pb, pbc = pb × pc, and pca = pc × pa; then compute the dot products d1 = pab · pbc, d2 = pab · pca, and d3 = pbc · pca. If d1, d2, and d3 all have the same sign (all positive or all negative), P lies on the same side of each edge of the triangle, which means the point is inside the triangle.
After the containing triangle is obtained, its UV position is obtained through the triangle's vertex indices, and the corresponding coordinate position in the planar UI is calculated from the UV coordinates. A second ray is then cast at that coordinate position of the planar UI, and when it intersects the planar UI the intersection position is obtained as follows: vdot = dot(ray direction, planar UI normal); ndot = dot(ray origin − a point on the planar UI, planar UI normal), the distance from the ray origin to the planar UI. If vdot equals 0, the ray does not meet the plane; if vdot is not 0 and ndot/vdot is greater than 0, the ray passes through the plane, and the spatial position of the intersection of the ray with the planar UI is obtained as ray origin coordinate + ray direction × ray distance.
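A sketch of this input handling in Unity C#. Note that for a MeshCollider, Unity's raycast already reports the hit triangle index and the interpolated UV, which can stand in for the manual triangle traversal described above; the class name and the canvasSize field are illustrative assumptions:

using UnityEngine;

// Step 3 sketch: on a click, cast a ray from the user's screen coordinates;
// the MeshCollider returns the hit triangle and UV, which are mapped to a
// coordinate point on the planar UI canvas in the same proportion.
public class SurfacePointerInput : MonoBehaviour
{
    public Vector2 canvasSize = new Vector2(1024, 1024); // assumed UI size

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider is MeshCollider)
        {
            int triangle = hit.triangleIndex;  // triangle containing the hit
            Vector2 uv = hit.textureCoord;     // interpolated UV at the hit

            // Same-proportion mapping from UV space to canvas coordinates.
            Vector2 canvasPoint = Vector2.Scale(uv, canvasSize);
            Debug.Log($"triangle {triangle}, uv {uv}, canvas point {canvasPoint}");
        }
    }
}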
As a further embodiment of the method,
whether a point P lies within a given triangle ABC is determined by mathematical operations, and the intersection of a ray passing through the plane of that triangle with the plane is then determined.
First, the steps for judging whether point P is inside triangle ABC are as follows:
Compute the vectors pa = P − a, pb = P − b, and pc = P − c, which run from the triangle vertices a, b, and c to the point P.
Compute the cross product of each pair of these vectors, each yielding a vector perpendicular to the triangle's plane when P lies in that plane: pab = pa × pb, pbc = pb × pc, pca = pc × pa.
Then compute the pairwise dot products of these three perpendicular vectors: d1 = pab · pbc, d2 = pab · pca, d3 = pbc · pca.
If d1, d2, and d3 have the same sign (all positive or all negative), point P lies inside triangle ABC.
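A direct C# transcription of this same-side test is sketched below; it assumes, as in the surrounding text, that P already lies in the triangle's plane (true for a ray-mesh intersection point), and the class and method names are illustrative:

using UnityEngine;

// Same-side test: P is inside triangle (a, b, c) when the cross products
// pab, pbc, pca all point to the same side of the plane, i.e. their
// pairwise dot products d1, d2, d3 share one sign.
public static class TriangleTest
{
    public static bool Contains(Vector3 p, Vector3 a, Vector3 b, Vector3 c)
    {
        Vector3 pa = p - a, pb = p - b, pc = p - c;

        Vector3 pab = Vector3.Cross(pa, pb);
        Vector3 pbc = Vector3.Cross(pb, pc);
        Vector3 pca = Vector3.Cross(pc, pa);

        float d1 = Vector3.Dot(pab, pbc);
        float d2 = Vector3.Dot(pab, pca);
        float d3 = Vector3.Dot(pbc, pca);

        // All non-negative or all non-positive: inside (or on an edge).
        return (d1 >= 0 && d2 >= 0 && d3 >= 0) || (d1 <= 0 && d2 <= 0 && d3 <= 0);
    }
}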
The intersection position of the ray with the planar UI is determined as follows:
Assume the normal vector N of the planar UI and a point O on the plane are known, as well as the direction vector D and origin coordinates S of the ray.
First, compute the dot product of the ray direction and the plane normal, vdot = D · N, which can be used to judge whether the ray is parallel to the plane or points into it.
Compute the distance from the ray origin to the plane, i.e., the dot product of the vector from a point on the plane (e.g., point O) to the ray origin with the normal, divided by the length of the normal: ndot = (S − O) · N / |N|.
If vdot equals 0, the ray is parallel to the plane and there is no intersection; if vdot is not equal to 0, the intersection parameter t = ndot/vdot is computed.
When t > 0, the ray passes through the plane, and the intersection coordinates are obtained by adding the origin coordinates to the direction vector multiplied by t, i.e., I = S + D × t. This is the three-dimensional position of the intersection of the ray with the planar UI.
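The same computation as a small C# helper; N is assumed to be a unit normal, and the sign convention is made explicit so that t comes out positive for a ray heading toward the plane (the helper name is illustrative):

using UnityEngine;

// Ray-plane intersection: plane through point O with unit normal N,
// ray from origin S along direction D; returns I = S + D * t when t > 0.
public static class PlaneIntersection
{
    public static bool TryIntersect(Vector3 n, Vector3 o, Vector3 s, Vector3 d,
                                    out Vector3 i)
    {
        i = Vector3.zero;

        float vdot = Vector3.Dot(d, n);        // D · N
        if (Mathf.Approximately(vdot, 0f))
            return false;                      // ray parallel to the plane

        // ndot taken as (O - S) · N, equivalent to the ndot/vdot form above
        // up to the orientation convention chosen for N.
        float ndot = Vector3.Dot(o - s, n);
        float t = ndot / vdot;                 // intersection parameter
        if (t <= 0f)
            return false;                      // plane behind the ray origin

        i = s + d * t;                         // I = S + D * t
        return true;
    }
}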
Finally, the user's operation at the corresponding coordinates of the model invokes the three-dimensional engine's internal method through an event, the operation is mapped onto the corresponding planar UI, and the planar UI is rendered in real time through the camera, drawn onto the map, and rendered onto the model surface through the shader. The drawing method is as follows: the camera's rendered picture is obtained through the three-dimensional engine's internal method, each pixel color is read, and the colors are drawn one by one onto a new map.
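A sketch of this per-pixel redraw in Unity C#: the UI camera's RenderTexture is read back into a Texture2D via ReadPixels and handed to the surface material. (When the RenderTexture itself is assigned as mainTexture, as in Step 1, the surface already updates every frame; this explicit copy follows the drawing method the text specifies. Class and field names are illustrative.)

using UnityEngine;

// Final redraw sketch: copy the UI camera's rendered picture into a new
// map and let the shader render it on the model surface.
public class UiMapRedraw : MonoBehaviour
{
    public Camera uiCamera;          // UI camera from Step 1
    public Material surfaceMaterial; // material ball on the model surface

    public void Redraw()
    {
        RenderTexture rt = uiCamera.targetTexture;
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = rt;

        // ReadPixels copies the active render target into the texture,
        // fetching each pixel color of the camera's picture.
        var map = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
        map.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        map.Apply();

        RenderTexture.active = previous;
        surfaceMaterial.mainTexture = map; // shader renders the new map
    }
}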
Example 2
This embodiment provides a real-time interaction system based on a three-dimensional engine on a model surface, comprising:
a data acquisition module configured to acquire a planar UI;
a drawing module configured to draw the planar UI to a map based on the RenderTexture technique;
a rendering module configured to render the map onto the model through a shader;
and a conversion module configured to run a script that converts the coordinates and behavior of user operations on the model surface onto the planar UI.
A computer-readable storage medium stores a plurality of instructions adapted to be loaded by a processor of a terminal device and to execute the above real-time interaction method based on a three-dimensional engine on a model surface.
A terminal device comprises a processor and a computer-readable storage medium, where the processor is configured to implement the instructions, and the computer-readable storage medium stores a plurality of instructions adapted to be loaded by the processor and to execute the above real-time interaction method based on a three-dimensional engine on a model surface.
The above embodiments do not limit the scope of protection of the present invention; all equivalent changes to the structure, shape, and principle of the invention shall fall within its scope of protection.

Claims (10)

1. A real-time interaction method based on a three-dimensional engine on a model surface, characterized by comprising the following steps:
acquiring a planar UI;
drawing the planar UI to a texture map using the RenderTexture technique;
rendering the resulting map onto the model through a shader;
and writing a script that converts the coordinates and behavior of user operations on the model surface onto the planar UI.
2. The real-time interaction method based on a three-dimensional engine on a model surface according to claim 1, wherein acquiring the planar UI comprises creating a UI camera in the three-dimensional graphics engine and rendering the planar UI with the UI camera.
3. The real-time interaction method based on a three-dimensional engine on a model surface according to claim 2, wherein drawing the planar UI to a map based on the RenderTexture technique comprises setting the planar UI as the rendering target of the UI camera, creating a RenderTexture map in the three-dimensional graphics engine, setting the resolution of the RenderTexture map to match the rendering target resolution of the UI camera, creating a material ball in the three-dimensional graphics engine, and assigning the RenderTexture map to the UI camera's target texture and to the material ball's main texture, respectively.
4. The real-time interaction method based on a three-dimensional engine on a model surface according to claim 3, wherein rendering the map onto the model through a shader comprises building a three-dimensional UI model, importing it into the three-dimensional engine, assigning the material ball to the model, and adding a Mesh collision box to the three-dimensional UI model.
5. The real-time interaction method based on a three-dimensional engine on a model surface according to claim 4, wherein writing a script to convert the coordinates and behavior of user operations on the model surface onto the planar UI comprises: performing collision detection against the surface of the Mesh collision box using rays; when a ray intersects the Mesh collision box, obtaining the intersection coordinates and converting them into the corresponding UV coordinate point of the three-dimensional UI model; converting the UV coordinate point into a coordinate point on the UI canvas; converting the user's operation behavior into an operation on the corresponding UI component according to the coordinate point on the UI canvas; and re-rendering the UI onto the corresponding map after the user operation completes.
6. The real-time interaction method based on a three-dimensional engine on a model surface according to claim 5, wherein obtaining the intersection coordinates and converting them into the corresponding UV coordinate point of the three-dimensional UI model comprises locating the point the ray hits on the surface of the three-dimensional UI model by a triangle-locating method: obtaining the intersection of the ray with the model surface through the collision box, traversing each triangle in the Mesh triangle array of the three-dimensional UI model to locate the triangle containing the intersection, obtaining the triangle's UV position through its vertex indices, and calculating the proportionally corresponding coordinate position in the planar UI from the UV coordinates.
7. The real-time interaction method based on a three-dimensional engine on a model surface according to claim 6, wherein converting the user's operation behavior into an operation on the corresponding UI component comprises invoking the three-dimensional engine's internal methods, through events, for the user's operation at the corresponding coordinates of the three-dimensional UI model, mapping the operation onto the corresponding planar UI, and rendering the planar UI in real time through the camera and drawing it onto the map, wherein the camera's rendered picture is obtained through the three-dimensional engine's internal method, each pixel color is read, and the pixel colors are drawn one by one onto a new map and rendered onto the model surface through the shader.
8. A real-time interaction system based on a three-dimensional engine on a model surface, characterized by comprising:
a data acquisition module configured to acquire a planar UI;
a drawing module configured to draw the planar UI to a map based on the RenderTexture technique;
a rendering module configured to render the map onto the model through a shader;
and a conversion module configured to run a script that converts the coordinates and behavior of user operations on the model surface onto the planar UI.
9. A computer-readable storage medium, wherein a plurality of instructions are stored therein, the instructions being adapted to be loaded by a processor of a terminal device and to execute the real-time interaction method based on a three-dimensional engine on a model surface according to claim 1.
10. A terminal device, comprising a processor and a computer-readable storage medium, the processor being configured to implement instructions, and the computer-readable storage medium being configured to store a plurality of instructions adapted to be loaded by the processor and to execute the real-time interaction method based on a three-dimensional engine on a model surface according to claim 1.
CN202410571296.6A 2024-05-10 Real-time interaction method and system based on three-dimensional engine on model surface Active CN118154760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410571296.6A CN118154760B (en) 2024-05-10 Real-time interaction method and system based on three-dimensional engine on model surface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410571296.6A CN118154760B (en) 2024-05-10 Real-time interaction method and system based on three-dimensional engine on model surface

Publications (2)

Publication Number Publication Date
CN118154760A true CN118154760A (en) 2024-06-07
CN118154760B CN118154760B (en) 2024-08-02


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102523473A (en) * 2011-12-01 2012-06-27 中兴通讯股份有限公司 Three-dimensional interface display device, method and terminal
KR101859318B1 (en) * 2017-01-09 2018-05-17 진형우 Video content production methods using 360 degree virtual camera
CN108958568A (en) * 2017-05-17 2018-12-07 北京暴风魔镜科技有限公司 A kind of display, exchange method and the device of three dimentional graph display mean camber UI
CN111167120A (en) * 2019-12-31 2020-05-19 网易(杭州)网络有限公司 Method and device for processing virtual model in game
CN116109737A (en) * 2023-03-07 2023-05-12 网易(杭州)网络有限公司 Animation generation method, animation generation device, computer equipment and computer readable storage medium
CN116310051A (en) * 2022-09-09 2023-06-23 武汉乐庭软件技术有限公司 Vehicle-mounted HMI two-dimensional simulated three-dimensional rendering method and system based on Unreal engine
WO2024045273A1 (en) * 2022-08-29 2024-03-07 上海智能制造功能平台有限公司 Pose estimation virtual data set generation method based on physical engine and collision entity


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZUO Lumei, HUANG Xinyuan: "Application of Texture Mapping Technology in 3D Game Engines" (in Chinese), Computer Simulation (计算机仿真), no. 10, 30 October 2004 (2004-10-30) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant