CN115761188A - Method and system for fusing multimedia and three-dimensional scene based on WebGL

Method and system for fusing multimedia and three-dimensional scene based on WebGL

Info

Publication number
CN115761188A
Authority
CN
China
Prior art keywords
multimedia
texture
rendering
virtual camera
webgl
Prior art date
Legal status
Pending
Application number
CN202211386797.4A
Other languages
Chinese (zh)
Inventor
杨自闯
万昌富
Current Assignee
Sichuan Chuanyun Intelligent Technology Co ltd
Original Assignee
Sichuan Chuanyun Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Chuanyun Intelligent Technology Co ltd
Priority to CN202211386797.4A
Publication of CN115761188A
Legal status: Pending

Landscapes

  • Image Generation (AREA)

Abstract

The invention discloses a method and a system for fusing multimedia and a three-dimensional scene based on WebGL, wherein the method comprises the following steps: S1: processing a multimedia source so that it is converted into a picture when the browser renders it, and taking the picture as a texture to obtain a multimedia texture; S2: establishing a virtual camera model matched with the multimedia; S3: calculating a projection matrix and a view matrix of each virtual camera in the virtual camera model to obtain a projection view matrix; S4: defining a WebGL fragment shader that takes the depth value of each fragment as the fragment color value, thereby obtaining a depth texture; S5: splicing the multimedia texture, the projection view matrix and the depth texture into arrays, using the arrays as shader uniform variables when rendering the scene, and then performing normal rendering to obtain a fusion result of the multimedia and the three-dimensional scene. The invention can fuse a three-dimensional model with real multimedia, and solves the technical problem of fusing a three-dimensional virtual scene with a real scene.

Description

Method and system for fusing multimedia and three-dimensional scene based on WebGL
Technical Field
The invention relates to the technical field of video fusion, in particular to a method and a system for fusing multimedia and three-dimensional scenes based on WebGL.
Background
Multimedia is usually presented in a separate player and cannot be associated with the background against which the content is played. This type of presentation causes a number of problems for the user, for example in an online video surveillance system: 1) the pictures are isolated, so no global preview is possible; 2) the pictures are scattered, so their spatial positions cannot be determined; 3) the screen space is limited, which restricts how much can be observed at once; 4) cross-camera tracking is difficult, and no spatio-temporal association can be established.
In the prior art, in a WebGL-based three-dimensional visual scene, text, graphics, pictures and videos can be used as textures of a three-dimensional model and displayed by texture mapping. However, the texture maps used in this technique are typically predefined during three-dimensional modeling and cannot be changed at will. When one multimedia source is to be distributed as texture maps over several models, making the three-dimensional model becomes very complicated. In scenes such as video surveillance, the video a user wants to show changes dynamically with the rotation of the pan-tilt unit and the adjustment of the focal length, so every state would have to be modeled, an implementation with very high labor and time costs.
Disclosure of Invention
In order to solve the above problems, the present invention aims to provide a method and a system for fusing multimedia and a three-dimensional scene based on WebGL, so as to fuse a three-dimensional model with multimedia and solve the problem of fusing a three-dimensional virtual scene with a real scene.
The technical scheme of the invention is as follows:
In one aspect, a method for fusing multimedia and a three-dimensional scene based on WebGL is provided, comprising the following steps:
s1: processing a multimedia source to convert the multimedia source into a picture when a browser renders the multimedia source, and taking the picture as a texture to obtain a multimedia texture;
s2: establishing a virtual camera model matched with the multimedia according to the type, position and size of the multimedia display;
s3: calculating a projection matrix and a view matrix of each virtual camera in the virtual camera model, and obtaining a projection view matrix according to the projection matrix and the view matrix;
s4: defining a fragment shader of WebGL, taking the depth value of each fragment as a fragment color value, then performing scene rendering, and storing a rendering result in a color buffer to obtain depth texture;
s5: and splicing the multimedia texture, the projection view matrix and the depth texture into an array, taking the array as a shader uniform variable when rendering a scene, and then performing normal rendering to obtain a fusion result of the multimedia and the three-dimensional scene.
Preferably, in step S1, the multimedia source includes any one or more of text, graphics, pictures and videos;
text or graphics are drawn on an HTMLCanvasElement, and the Canvas is output in picture format to obtain the multimedia texture corresponding to the text or graphics;
a video is loaded into an HTMLVideoElement, and a frame picture of the video is obtained when the browser renders it; the frame picture is the multimedia texture corresponding to the video.
Preferably, in step S2, when the virtual camera model is established, an orthographic camera or a perspective camera is used, and the virtual camera parameters are adjusted according to the type, position and size of the multimedia display, where the virtual camera parameters include the camera position, the camera sight line, the up direction and the view volume.
Preferably, in step S4, when the depth texture is obtained, the depth value, in the clipping space of the virtual camera model, of a point in the three-dimensional scene is used as the fragment color value, and the floating-point depth value is converted into an RGBA integer color value.
Preferably, in step S5, when performing normal rendering, the opaque object and the translucent object in the scene are rendered respectively, a rendering result one of the opaque object and a rendering result two of the translucent object are obtained, and then the rendering result one and the rendering result two are mixed to obtain a fusion result of the multimedia and the three-dimensional scene.
Preferably, the rendering steps for the opaque objects and the translucent objects are the same, and each comprises the following sub-steps:
S51: in the vertex shader for normal rendering, calculating the coordinates of the vertex in the clipping space of each virtual camera;
S52: converting the coordinates in the clipping space to a point in standard device space;
S53: judging, from the converted point in standard device space, whether the vertex is in the clipping space; if not, performing no processing; if yes, going to step S54;
S54: comparing the vertex in the clipping space with the depth value in the depth texture array and judging whether the vertex is visible; if not, performing no processing; if yes, obtaining the fragment color value from the multimedia texture array and entering step S55;
S55: mixing the multimedia color values with the normal color, and storing the rendering result in the color buffer;
through steps S51 to S55, rendering result one of the opaque objects and rendering result two of the translucent objects are obtained.
Preferably, in step S51, the coordinates of the vertex in the clipping space of each virtual camera are calculated by the following formula: projection view matrix × model matrix × vertex coordinates.
Preferably, in step S52, the conversion formula is: position.xyz / position.w, where position.xyz is the x, y, z components of the coordinate and position.w is the w component of the coordinate;
the standard device space (x, y, z) has a range of [-1, 1]; points within this range are in the clipping space, and points outside this range are not.
Preferably, in step S54, the color value of the point in the depth texture array is obtained through texture2D(tex, fragCoord.xy) and decoded into a depth value; it is then judged whether the z value of the point in standard device space is not larger than the depth value; if so, the fragment is visible, otherwise the fragment is invisible.
In another aspect, a system for fusing WebGL-based multimedia and three-dimensional scenes is further provided, where the system includes:
the multimedia source processing module is used for processing the multimedia source to convert the multimedia source into a picture when the multimedia source is rendered by a browser, and the picture is taken as a texture to obtain a multimedia texture;
the virtual camera model establishing module is used for establishing a virtual camera model matched with the multimedia;
and the multimedia fusion processing module is used for acquiring the projection view matrix and the depth texture, splicing the multimedia texture, the projection view matrix and the depth texture into an array, using the array as a shader uniform variable during scene rendering, and then performing normal rendering to fuse the multimedia and the three-dimensional scene.
The invention has the beneficial effects that:
the invention can fuse the three-dimensional model and the real model, and solves the problem of fusion of the three-dimensional virtual scene and the real scene; in addition, multimedia is subjected to fusion display in a three-dimensional scene, and a display position needs to be modeled in advance; the same technology can be adopted to realize the fusion display of characters, graphs, pictures and videos in a three-dimensional scene; the display position and size can be dynamically adjusted; multiple multimedia sources of different types can be simultaneously presented in a three-dimensional scene; the multimedia source may be additionally processed and then presented in a three-dimensional field; a plurality of multimedia sources can be fused and then used as one source for displaying; three-dimensional models containing translucency can be perfectly supported.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of the method for fusing WebGL-based multimedia and a three-dimensional scene according to the invention;
FIG. 2 is a diagram illustrating the result of fusing multimedia with a three-dimensional scene according to one embodiment;
FIG. 3 is a schematic diagram of the original model of a three-dimensional scene of another embodiment;
FIG. 4 is a schematic diagram of the model after the three-dimensional scene of the embodiment of FIG. 3 is fused with a video.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the drawings. It should be noted that, in the present application, the embodiments and the technical features of the embodiments may be combined with each other without conflict. It is noted that, unless otherwise indicated, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "comprising" or "including" and the like in the present disclosure is intended to mean that the elements or items listed before the term cover the elements or items listed after the term and their equivalents, but not to exclude other elements or items.
As shown in fig. 1, the present invention provides a method for fusing a multimedia and a three-dimensional scene based on WebGL, comprising the following steps:
s1: and processing the multimedia source to convert the multimedia source into a picture when the multimedia source is rendered by a browser, and obtaining the multimedia texture by taking the picture as the texture.
In a specific embodiment, the multimedia source includes any one or more of text, graphics, pictures, and videos; any multimedia source that can ultimately be converted into a texture map is applicable to the present invention.
Optionally, text or graphics are drawn on an HTMLCanvasElement, and the Canvas is output in picture format to obtain the multimedia texture corresponding to the text or graphics; a picture can be used directly as a multimedia texture; a video is loaded into an HTMLVideoElement, and a frame picture of the video is obtained when the browser renders it, the frame picture being the multimedia texture corresponding to the video.
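By way of an illustrative sketch only (not part of the original disclosure; the element id, sizes and helper name are assumptions), step S1 can be realized in the browser roughly as follows:

```javascript
// Sketch of step S1: turn a multimedia source into a WebGL texture.
// The element id, canvas size and helper name are illustrative assumptions.
function createMultimediaTexture(gl, source) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Canvas (text/graphics), image and video elements can all be passed
  // directly to texImage2D; for a video this captures the current frame.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, source);
  // Settings suitable for non-power-of-two sources.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return texture;
}

// Text or graphics: draw onto an HTMLCanvasElement and use it as the source.
const canvas = document.createElement('canvas');
canvas.width = 512; canvas.height = 256;
const ctx = canvas.getContext('2d');
ctx.fillStyle = '#ffffff';
ctx.font = '48px sans-serif';
ctx.fillText('label text', 20, 60);

// Video: an HTMLVideoElement; re-upload a frame before each render.
const video = document.getElementById('monitor-video'); // assumed element id
// const videoTexture = createMultimediaTexture(gl, video); // once per frame
```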
S2: and establishing a virtual camera model matched with the multimedia according to the type, the position and the size of the multimedia display.
In a specific embodiment, when the virtual camera model is established, a corresponding virtual camera is added to the three-dimensional scene for each multimedia source. Optionally, the virtual camera is an orthographic camera or a perspective camera, and the virtual camera parameters are adjusted according to the type (front view or perspective), position and size of the multimedia display; the virtual camera parameters include the camera position, the camera sight line, the up direction and the view volume.
S3: and calculating a projection matrix and a view matrix of each virtual camera in the virtual camera model, and obtaining a projection view matrix (the projection matrix is multiplied by the view matrix) according to the projection matrix and the view matrix.
S4: and defining a fragment shader of the WebGL, taking the depth value of each fragment as a fragment color value, then performing scene rendering, and storing a rendering result in a color buffer to obtain depth textures.
In a specific embodiment, when the depth texture is obtained, the depth value, in the clipping space of the virtual camera model, of a point in the three-dimensional scene is used as the fragment color value; since the depth value is a floating-point number between 0 and 1 and is usually very small, an efficient encoding is used to convert the floating-point number into an RGBA integer color value; the rendering result is stored in the color buffer, i.e. the depth texture is obtained.
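One common way to realize such an encoding in a WebGL 1.0 fragment shader is the standard depth-to-RGBA bit-shift packing; the sketch below is illustrative only, and the exact encoding used by the invention is not specified in the disclosure:

```javascript
// Sketch of step S4: a fragment shader that writes the depth of each fragment
// as a packed RGBA color. The packing is the common WebGL depth-to-RGBA trick;
// the varying name and the encoding are assumptions.
const depthFragmentShader = `
  precision highp float;
  varying float vDepth;           // depth in [0, 1], passed from the vertex shader

  vec4 packDepthToRGBA(float depth) {
    const vec4 bitShift = vec4(1.0, 256.0, 256.0 * 256.0, 256.0 * 256.0 * 256.0);
    const vec4 bitMask  = vec4(1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0, 0.0);
    vec4 rgba = fract(depth * bitShift);
    rgba -= rgba.gbaa * bitMask;  // remove the bits carried into lower channels
    return rgba;
  }

  void main() {
    gl_FragColor = packDepthToRGBA(vDepth);
  }
`;
```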
S5: and splicing the multimedia texture, the projection view matrix and the depth texture into an array, taking the array as a shader uniform variable when rendering a scene, and then performing normal rendering to obtain a fusion result of the multimedia and the three-dimensional scene.
In a specific embodiment, during normal rendering, an opaque object and a semi-transparent object in a scene are rendered respectively to obtain a rendering result I of the opaque object and a rendering result II of the semi-transparent object, and then the rendering result I and the rendering result II are mixed to obtain a fusion result of the multimedia and the three-dimensional scene. Optionally, the rendering steps of the opaque objects and the translucent objects are the same, each comprising the sub-steps of:
s51: in a vertex shader for normal rendering, coordinates of a vertex in a clipping space of each virtual camera are calculated, and the calculation formula is as follows: projection view matrix x model matrix x vertex coordinates.
It should be noted that the model matrix is the matrix relating the original model to the new model obtained by deforming it through scaling, translation, rotation and the like, and is prior art.
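For illustration only (gl-matrix is again assumed and the transform values are arbitrary), such a model matrix could be composed from a translation, rotation and scale as follows:

```javascript
import { mat4, quat, vec3 } from 'gl-matrix';

// Sketch of the (prior-art) model matrix mentioned above, built from an
// assumed rotation, translation and scale of the original model.
const modelMatrix = mat4.create();
mat4.fromRotationTranslationScale(
  modelMatrix,
  quat.fromEuler(quat.create(), 0, 45, 0),  // rotation in degrees about x, y, z
  vec3.fromValues(2, 0, -3),                // translation
  vec3.fromValues(1, 1, 1));                // scale

// Clip-space position of a vertex at one virtual camera (step S51):
// clipPosition = projectionView × modelMatrix × vertexPosition.
```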
S52: converting the coordinates in the cutting space into points in a standard equipment space, wherein the conversion formula is as follows: position.xyz/position.w; where position.xyz is the x, y, z component of the coordinate and position.w is the w component of the coordinate.
S53: judging whether the vertex is in a clipping space or not according to the conversion result of the point in the standard equipment space, and if not, not processing; if so, the process proceeds to step S54.
The standard equipment space (x, y, z) has a range of [ -1,1], where points within this range are in clip space and points not within this range are not.
S54: comparing the vertex in the clipping space with the depth value in the depth texture array, judging whether the vertex in the clipping space is visible or not, and if not, not processing; if yes, the color value of the fragment is obtained from the multimedia texture array, and the process proceeds to step S55.
In a specific embodiment, the color value of the point in the depth texture array is obtained through texture2D(tex, fragCoord.xy) and decoded into a depth value; it is then judged whether the z value of the point in standard device space is not larger than the depth value; if so, the fragment is visible, otherwise the fragment is invisible (a shader sketch of this test is given after the sub-steps below).
S55: mixing a plurality of multimedia color values and normal colors, and keeping a rendering result in a color buffer area;
through steps S51 to S55, a rendering result one of the opaque objects and a rendering result two of the translucent objects are obtained.
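The following fragment-shader sketch illustrates sub-steps S52 to S55 for a single virtual camera; the uniform and varying names, the depth bias and the [0, 1] depth mapping are assumptions, the loop over the uniform arrays of several cameras is omitted, and the depth decoding matches the packing sketched under step S4 above:

```javascript
// Sketch of steps S52-S55 for one virtual camera. Uniform/varying names are
// assumptions; iterating over a uniform array of cameras is omitted for brevity.
const fusionFragmentSnippet = `
  uniform sampler2D uDepthTexture;       // depth texture of this virtual camera
  uniform sampler2D uMultimediaTexture;  // multimedia texture of this camera
  varying vec4 vCameraClipPosition;      // projectionView * model * position

  float unpackDepthFromRGBA(vec4 rgba) {
    const vec4 bitShift = vec4(1.0, 1.0 / 256.0,
                               1.0 / (256.0 * 256.0),
                               1.0 / (256.0 * 256.0 * 256.0));
    return dot(rgba, bitShift);
  }

  vec4 applyMultimedia(vec4 normalColor) {
    // S52: clip space -> standard device space (normalized device coordinates).
    vec3 ndc = vCameraClipPosition.xyz / vCameraClipPosition.w;
    // S53: skip fragments outside the virtual camera's clipping space.
    if (any(greaterThan(abs(ndc), vec3(1.0)))) return normalColor;
    vec2 uv = ndc.xy * 0.5 + 0.5;        // [-1, 1] -> [0, 1] texture coordinates
    // S54: visibility test against the stored depth (bias value is assumed).
    float storedDepth = unpackDepthFromRGBA(texture2D(uDepthTexture, uv));
    if (ndc.z * 0.5 + 0.5 > storedDepth + 0.002) return normalColor;
    // S55: mix the multimedia color with the normally rendered color.
    vec4 mediaColor = texture2D(uMultimediaTexture, uv);
    return mix(normalColor, mediaColor, mediaColor.a);
  }
`;
```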
The invention realizes the fusion of multimedia and three-dimensional scenes based on dynamic textures and shadow maps by reorganizing the WebGL rendering pipeline. Based on the Shadow Mapping principle, the range of the multimedia presentation is kept consistent with the visual range of the virtual camera. When the multimedia is fused with the scene as a shadow map, the shadow map can be additionally processed according to specific requirements (such as running image recognition and adding the recognition result to the texture), and the blending mode can also be additionally processed. When multiple multimedia sources are mixed with the scene, the texture and transformation matrix corresponding to each multimedia source are passed into the rendering pipeline of the current scene as uniform arrays. The maximum number of virtual cameras supported is determined by the physical device. When the three-dimensional scene contains semi-transparent materials, the pipeline that mixes the shadow maps with the scene is split: the opaque models and the semi-transparent models are rendered separately in the shadow maps and in the scene, and the two rendering results are then mixed.
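As an illustrative sketch of passing the per-camera data as uniform arrays (uniform names, data layout and the helper function are assumptions, and only loosely match the shader sketches above), the textures and projection view matrices of several multimedia sources might be uploaded as follows:

```javascript
// Sketch: passing the data of N multimedia sources as uniform arrays.
// Uniform names and the shader program handle are illustrative assumptions.
function uploadFusionUniforms(gl, program, cameras) {
  // Concatenate every projection view matrix into one Float32Array
  // (a mat4 array uniform such as "uniform mat4 uProjectionViews[N];").
  const matrices = new Float32Array(cameras.length * 16);
  cameras.forEach((cam, i) => matrices.set(cam.projectionView, i * 16));
  gl.uniformMatrix4fv(
    gl.getUniformLocation(program, 'uProjectionViews'), false, matrices);

  // Bind each multimedia texture and each depth texture to its own texture
  // unit and pass the unit indices as sampler arrays.
  const mediaUnits = [];
  const depthUnits = [];
  cameras.forEach((cam, i) => {
    gl.activeTexture(gl.TEXTURE0 + 2 * i);
    gl.bindTexture(gl.TEXTURE_2D, cam.multimediaTexture);
    mediaUnits.push(2 * i);
    gl.activeTexture(gl.TEXTURE0 + 2 * i + 1);
    gl.bindTexture(gl.TEXTURE_2D, cam.depthTexture);
    depthUnits.push(2 * i + 1);
  });
  gl.uniform1iv(gl.getUniformLocation(program, 'uMultimediaTextures'), mediaUnits);
  gl.uniform1iv(gl.getUniformLocation(program, 'uDepthTextures'), depthUnits);
}
```

In such a sketch the number of simultaneously fused sources is bounded by the device's available texture units (gl.MAX_TEXTURE_IMAGE_UNITS), which is consistent with the statement above that the maximum number of virtual cameras is determined by the physical device.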
In a specific embodiment, taking a certain substation as an example, the above multimedia and three-dimensional scene fusion method is used to fuse multimedia with the three-dimensional scene, and the result is shown in FIG. 2. As can be seen from FIG. 2, the video fusion technique can fuse the three-dimensional model with a real-time surveillance video of the transformer's conservator gauge, so that the gauge reading is monitored in real time.
In another specific embodiment, the present invention is used to perform video fusion on a pipeline, and three-dimensional scenes before and after video fusion are shown in fig. 3 and 4. As can be seen from fig. 3 and 4, the real-time monitoring video and the three-dimensional model can be fused to achieve the effect of combining virtuality and reality.
In another aspect, the invention also provides a system for fusing multimedia and a three-dimensional scene based on WebGL, the system comprising:
the multimedia source processing module is used for processing a multimedia source to convert the multimedia source into a picture when the multimedia source is rendered by a browser, and the picture is taken as a texture to obtain a multimedia texture;
the virtual camera model establishing module is used for establishing a virtual camera model matched with the multimedia;
the multimedia fusion processing module is used for acquiring a projection view matrix and a depth texture, splicing the multimedia texture, the projection view matrix and the depth texture into an array, using the array as a shader uniform variable during scene rendering, and then performing normal rendering to fuse the multimedia and the three-dimensional scene.
In conclusion, the invention can solve problems such as the poor association between the monitoring video and the three-dimensional scene, the temporal and spatial inconsistency of the overall fusion, and the poor linkage between the three-dimensional scene and the video, achieving a virtual-real fusion of the three-dimensional model with the real scene. Compared with the prior art, the invention represents a remarkable advance.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for fusing multimedia and three-dimensional scenes based on WebGL is characterized by comprising the following steps:
s1: processing a multimedia source to convert the multimedia source into a picture when a browser renders the multimedia source, and taking the picture as a texture to obtain a multimedia texture;
s2: establishing a virtual camera model matched with the multimedia according to the type, position and size of the multimedia display;
s3: calculating a projection matrix and a view matrix of each virtual camera in the virtual camera model, and obtaining a projection view matrix according to the projection matrix and the view matrix;
s4: defining a fragment shader of WebGL, taking the depth value of each fragment as a fragment color value, then performing scene rendering, and storing a rendering result in a color buffer to obtain depth texture;
s5: and splicing the multimedia texture, the projection view matrix and the depth texture into an array, taking the array as a shader uniform variable when rendering a scene, and then performing normal rendering to obtain a fusion result of the multimedia and the three-dimensional scene.
2. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in claim 1, wherein in step S1, the multimedia source comprises any one or more of text, graphics, pictures and videos;
text or graphics are drawn on an HTMLCanvasElement, and the Canvas is output in picture format to obtain the multimedia texture corresponding to the text or graphics;
a video is loaded into an HTMLVideoElement, and a frame picture of the video is obtained when the browser renders it, the frame picture being the multimedia texture corresponding to the video.
3. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in claim 1, wherein in step S2, an orthographic camera or a perspective camera is used when the virtual camera model is established, and the virtual camera parameters are adjusted according to the type, position and size of the multimedia display, the virtual camera parameters comprising the camera position, the camera sight line, the up direction and the view volume.
4. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in claim 1, wherein in step S4, when the depth texture is obtained, the depth value, in the clipping space of the virtual camera model, of a point in the three-dimensional scene is taken as the fragment color value, and the floating-point depth value is converted into an RGBA integer color value.
5. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in any one of claims 1-4, wherein in step S5, during normal rendering, the opaque objects and the semi-transparent objects in the scene are rendered separately to obtain rendering result one of the opaque objects and rendering result two of the semi-transparent objects, and rendering result one and rendering result two are then mixed to obtain the fusion result of the multimedia and the three-dimensional scene.
6. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in claim 5, wherein the rendering steps for the opaque objects and the semi-transparent objects are the same, each comprising the following sub-steps:
s51: in the vertex shader for normal rendering, calculating the coordinates of the vertex in the clipping space of each virtual camera;
s52: converting the coordinates in the clipping space to a point in standard device space;
s53: judging, from the converted point in standard device space, whether the vertex is in the clipping space; if not, performing no processing; if yes, going to step S54;
s54: comparing the vertex in the clipping space with the depth value in the depth texture array and judging whether the vertex is visible; if not, performing no processing; if yes, obtaining the fragment color value from the multimedia texture array and entering step S55;
s55: mixing the multimedia color values with the normal color, and storing the rendering result in the color buffer;
through steps S51 to S55, rendering result one of the opaque objects and rendering result two of the translucent objects are obtained.
7. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in claim 6, wherein in step S51, the coordinates of the vertex in the clipping space of each virtual camera are calculated by the following formula: projection view matrix × model matrix × vertex coordinates.
8. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in claim 6, wherein in step S52, the conversion formula is: position.xyz / position.w, where position.xyz is the x, y, z components of the coordinate and position.w is the w component of the coordinate;
the standard device space (x, y, z) has a range of [-1, 1]; points within this range are in the clipping space, and points outside this range are not.
9. The method for fusing WebGL-based multimedia and a three-dimensional scene as claimed in claim 6, wherein in step S54, the color value of the point in the depth texture array is obtained through texture2D(tex, fragCoord.xy) and decoded into a depth value; it is then judged whether the z value of the point in standard device space is not larger than the depth value; if so, the fragment is visible, otherwise the fragment is invisible.
10. A system for fusing multimedia and three-dimensional scenes based on WebGL, which is characterized by comprising:
the multimedia source processing module is used for processing a multimedia source to convert the multimedia source into a picture when the multimedia source is rendered by a browser, and the picture is taken as a texture to obtain a multimedia texture;
the virtual camera model establishing module is used for establishing a virtual camera model matched with the multimedia;
and the multimedia fusion processing module is used for acquiring the projection view matrix and the depth texture, splicing the multimedia texture, the projection view matrix and the depth texture into an array, using the array as a shader uniform variable during scene rendering, and then performing normal rendering to fuse the multimedia and the three-dimensional scene.
CN202211386797.4A 2022-11-07 2022-11-07 Method and system for fusing multimedia and three-dimensional scene based on WebGL Pending CN115761188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211386797.4A CN115761188A (en) 2022-11-07 2022-11-07 Method and system for fusing multimedia and three-dimensional scene based on WebGL

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211386797.4A CN115761188A (en) 2022-11-07 2022-11-07 Method and system for fusing multimedia and three-dimensional scene based on WebGL

Publications (1)

Publication Number Publication Date
CN115761188A true CN115761188A (en) 2023-03-07

Family

ID=85357181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211386797.4A Pending CN115761188A (en) 2022-11-07 2022-11-07 Method and system for fusing multimedia and three-dimensional scene based on WebGL

Country Status (1)

Country Link
CN (1) CN115761188A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557711A (en) * 2024-01-12 2024-02-13 中科图新(苏州)科技有限公司 Method, device, computer equipment and storage medium for determining visual field

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150049086A1 (en) * 2013-08-16 2015-02-19 Genius Matcher Ltd. 3D Space Content Visualization System
CN112446943A (en) * 2019-08-14 2021-03-05 中移(苏州)软件技术有限公司 Image rendering method and device and computer readable storage medium
CN111354062A (en) * 2020-01-17 2020-06-30 中国人民解放军战略支援部队信息工程大学 Multi-dimensional spatial data rendering method and device
CN111508052A (en) * 2020-04-23 2020-08-07 网易(杭州)网络有限公司 Rendering method and device of three-dimensional grid body
CN112437276A (en) * 2020-11-20 2021-03-02 埃洛克航空科技(北京)有限公司 WebGL-based three-dimensional video fusion method and system
CN113283543A (en) * 2021-06-24 2021-08-20 北京优锘科技有限公司 WebGL-based image projection fusion method, device, storage medium and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, Kexin et al.: "Construction and Implementation of a Three-Dimensional Information Visualization *** for Islands", Marine Information (海洋信息), vol. 36, no. 02

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557711A (en) * 2024-01-12 2024-02-13 中科图新(苏州)科技有限公司 Method, device, computer equipment and storage medium for determining visual field
CN117557711B (en) * 2024-01-12 2024-04-09 中科图新(苏州)科技有限公司 Method, device, computer equipment and storage medium for determining visual field

Similar Documents

Publication Publication Date Title
US8134556B2 (en) Method and apparatus for real-time 3D viewer with ray trace on demand
KR100335306B1 (en) Method and apparatus for displaying panoramas with streaming video
CN111161392B (en) Video generation method and device and computer system
JPH07262410A (en) Method and device for synthesizing picture
CN107767437B (en) Multilayer mixed asynchronous rendering method
CN114219902A (en) Volume rendering method and device for meteorological data and computer equipment
GB2406252A (en) Generation of texture maps for use in 3D computer graphics
CN111862295A (en) Virtual object display method, device, equipment and storage medium
CN107005689B (en) Digital video rendering
CN103177467A (en) Method for creating naked eye 3D (three-dimensional) subtitles by using Direct 3D technology
CN111462205B (en) Image data deformation, live broadcast method and device, electronic equipment and storage medium
CN113012270A (en) Stereoscopic display method and device, electronic equipment and storage medium
CN115761188A (en) Method and system for fusing multimedia and three-dimensional scene based on WebGL
Whitted et al. A software test-bed for the development of 3-D raster graphics systems
KR20090000729A (en) System and method for web based cyber model house
US8669996B2 (en) Image processing device and image processing method
CN115311395A (en) Three-dimensional scene rendering method, device and equipment
JP4272891B2 (en) Apparatus, server, system and method for generating mutual photometric effect
JP2003168130A (en) System for previewing photorealistic rendering of synthetic scene in real-time
Eren et al. Object-based video manipulation and composition using 2D meshes in VRML
Trapp Interactive rendering techniques for focus+ context visualization of 3d geovirtual environments
WO2003034343A1 (en) Hierarchical sort of objects linked in virtual three-dimensional space
KR20210052005A (en) Method for augmenting video content in a 3-dimensional environment
CN111599011A (en) WebGL technology-based rapid construction method and system for power system scene
Feldman et al. Interactive 2D to 3D stereoscopic image synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230307