CN116366827A - High-precision large-scene image processing and transmitting method and device facing web end - Google Patents


Info

Publication number
CN116366827A
CN116366827A (application number CN202310077909.6A)
Authority
CN
China
Prior art keywords
view cone
error
bounding volume
nodes
screen space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310077909.6A
Other languages
Chinese (zh)
Other versions
CN116366827B (en)
Inventor
Geng Chenming (耿琛明)
Shen Xukun (沈旭昆)
Hu Yong (胡勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunnan Innovation Institute of Beihang University
Original Assignee
Yunnan Innovation Institute of Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yunnan Innovation Institute of Beihang University filed Critical Yunnan Innovation Institute of Beihang University
Priority to CN202310077909.6A priority Critical patent/CN116366827B/en
Publication of CN116366827A publication Critical patent/CN116366827A/en
Application granted granted Critical
Publication of CN116366827B publication Critical patent/CN116366827B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194: Transmission of image signals
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a web-end-oriented high-precision large-scene image processing and transmitting method and a device thereof. The method comprises the following steps: step S1: performing kd-tree-based space division on the model image generated by aerial photography to obtain a division map; step S2: performing model simplification on each divided block in the division map to obtain a simplified model map; step S3: when the simplified model map is loaded on the web side, applying delayed loading and then transmitting. Compared with the prior art, the method is a kd-tree-based scene division method; the division preserves the geometric characteristics within each partition, reduces concurrent resource use, and provides users with random access to the mesh stream. The method realizes reasonable division of large-scale scenes, accelerates the loading of large scenes, optimizes the model simplification effect, and facilitates subsequent transmission work.

Description

High-precision large-scene image processing and transmitting method and device facing web end
Technical Field
The application relates to the technical field of image processing, and in particular to a web-end-oriented high-precision large-scene image processing and transmitting method and a web-end-oriented high-precision large-scene image processing and transmitting device.
Background
With the popularization of the internet, three-dimensional interaction technology is used to display three-dimensional models and scenes, and plays an important role in enhancing the realism and immersion of the user experience. Currently, with the rapid development of three-dimensional data acquisition and modeling technology, the data volume of three-dimensional models represented by polygonal meshes keeps increasing, which brings many difficulties to the digital storage, transmission and rendering of exhibits.
In fact, most existing systems suffer from delays caused by overly long data download times and a lack of adaptability to networks and client devices; in addition, model topology and attribute characteristics cannot be maintained, and simplification to generate multi-level-of-detail models of three-dimensional scenes is not supported. These defects seriously affect the quality of the user experience. The root cause of these problems is that three-dimensional geometric models are transmitted as meshes, while the transmission speed of the network cannot meet users' demands.
Thus, studying how to efficiently transmit, in real time, the three-dimensional geometric information required by the user over the network has always been an important issue in graphics research.
In recent years, with the development of technology, three-dimensional models have become easier to acquire and are more widely applied. Three-dimensional model interaction, as a novel mode of information interaction, provides a more intuitive means of information representation than two-dimensional interaction. However, such interactive applications require users to download and install software, and their limits on portability, cross-platform operation and information interoperability severely hamper large-scale cross-platform application and popularization. One solution is to develop web-oriented three-dimensional model interactive applications. A three-dimensional model interactive application based on a web browser is portable and universal, and provides a basis for sharing three-dimensional model information across platforms.
Three-dimensional model rendering is a key supporting technology for web-oriented three-dimensional model interactive applications. However, efficient rendering and dynamic interaction of web-oriented three-dimensional models present challenges. On the one hand, because JavaScript interpretation is inefficient, web browser platforms have low performance compared with the computing power of local applications, and web applications cannot efficiently handle large amounts of data and complex computation. On the other hand, three-dimensional models contain more data than two-dimensional models. As business requirements grow in complexity, these models also carry large amounts of attribute data such as normals and lighting, and complex interaction requirements further increase the amount of data used for 3D rendering in the web browser. For these two reasons, the computing power of the web platform and the data size of the 3D model both affect the user experience of web applications.
Since the advent of WebGL in 2011, WebGL-based third-party JavaScript 3D rendering engines such as Three.js and Babylon.js have been implemented; these load the entire model file at once.
But this mechanism has the following problems in Web3D applications:
First, the one-time model data loading and rendering mechanism based on a synchronous communication mode causes large model loading and rendering delays: in the synchronous mode, the client can load and render only after all model data has been transmitted. Moreover, insufficient or unstable bandwidth of a mobile radio network may delay model loading.
Furthermore, for web platforms with weak computing power, rendering a large 3D model all at once incurs considerable delay, and the browser page may even freeze.
Traditionally, the problem of large data delays has been addressed with progressive meshes in three-dimensional applications. However, the complex model decompression process puts greater computational pressure on the web platform, which increases user response delay and leads customers to abandon this approach; end users cannot afford large-scale, complex interactive 3D applications built this way.
Therefore, an asynchronous, decentralized three-dimensional model transmission method is needed to solve the network congestion caused by one-time loading. In addition, the computational redundancy of the one-time loading-and-rendering mechanism must be considered: besides topology data, a three-dimensional model carries a large amount of data related to users' dynamic interaction requirements; this data is not needed when rendering is initialized, but is used in subsequent user interaction. Loading and rendering all model structure data at once brings extra loading and computing pressure to web platforms with weaker computing capacity, and lengthens the computing response delay of the service.
In the conventional cloud-to-web transmission method, complex interaction and large-data-volume three-dimensional model rendering increase communication cost and delay user service because terminal computing power is insufficient. Accordingly, an efficient method is needed to lighten model initialization and data rendering and thereby relieve the computational pressure on web-based terminals.
Based on the two points discussed above, building more complex Web3D applications with a one-time model-loading rendering mechanism may lead to serious service delays and even disruption. Therefore, it is important to research a three-dimensional model loading and rendering mechanism with interactive functionality in a web platform environment.
Disclosure of Invention
In view of the above technical problems, the application provides a web-end-oriented high-precision large-scene image processing and transmitting method and a web-end-oriented high-precision large-scene image processing and transmitting device, effectively solving the problem of streaming high-precision large scenes to the web end.
The application provides a web-end-oriented high-precision large-scene image processing and transmitting method, which comprises the following steps:
step S1: performing space division based on kd tree on the model image generated by aerial photography to obtain a division map;
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
step S3: when the simplified model map is loaded on the web side, applying delayed loading and then transmitting;
wherein, step S2 includes the following steps:
step S21: judging whether each visible area block in the division map has been recursed down to the leaf nodes of the division map:
step S211: if the judgment result is negative, checking the positional relation between the bounding volume and the view cone in the division map; if the bounding volume is outside the view cone, setting all its nodes to be invisible and setting its child nodes outside the view cone to be invisible, thereby culling all bounding-volume nodes outside the view cone;
step S212: if the bounding volume intersects the view cone, continuing to recursively check the positional relation between the child nodes of the bounding volume and the view cone; if a child node of the bounding volume is inside the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level according to the screen space error to load the corresponding division map region block;
wherein the maximum screen space error is calculated as:

K = height / (2 × tan(fov / 2))

sse = (g × K) / d

that is, sse = (g × height) / (2 × d × tan(fov / 2))

wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; K is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
if a child node of the bounding volume does not intersect the view cone, i.e. lies entirely outside it, it is set to be invisible;
step S221: if the determination in step S21 is yes, the positional relationship of the bounding volume and the view cone is checked:
step S222: if the bounding volume is located outside the view cone, all nodes of the bounding volume are set to be invisible, and if the child nodes of the bounding volume are also outside the view cone, all nodes of the bounding volume outside the view cone are eliminated;
step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
Preferably, step S1 comprises the steps of:
step S11: for the mesh M of each object in the model image, acquiring the bounding box BM of all its vertices VM;
step S12: sequentially determining the longest axis AM among the three axes of each bounding box BM;
step S13: obtaining the midpoint MVM from the vertices VM ordered along each longest axis AM;
step S14: splitting the vertices VM at the midpoint MVM into two partitions: partition ML and partition MR;
step S15: if the vertex count VML of partition ML or VMR of partition MR is greater than N (N being the number of vertices specified by the user for one partition), repeating steps S11 to S14; after the operation finishes according to the above steps, the division map with a tree data structure is obtained.
Another aspect of the present application further provides an apparatus for the web-end-oriented high-precision large-scene image processing method according to claim 1 or 2, including: the division module, used for performing kd-tree-based space division on the model image generated by aerial photography to obtain a division map;
the simplified model module is used for carrying out model simplification on each divided block in the division map to obtain a simplified model map;
and the transmission module is used for transmitting after the simplified model diagram is loaded on the web terminal and the loading is delayed.
Preferably, the simplified model module comprises:
the first judging module is used for judging whether each visible area block in the partition map is recursively transmitted to leaf nodes of the partition map;
the second judging module, used for checking the positional relation between the bounding volume and the view cone if the judgment result in step S21 is affirmative:
the loading module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
Preferably, the first judging module includes:
the first setting module, used for checking the positional relation between the bounding volume and the view cone in the division map if the judgment result is negative; if the bounding volume is outside the view cone, setting all its nodes to be invisible and setting its child nodes outside the view cone to be invisible, thereby culling all bounding-volume nodes outside the view cone;
the first error calculation module is used for continuing recursively checking the position relation between the child node of the bounding volume and the view cone if the bounding volume intersects the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error if the child node of the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if the judgment result is negative, selecting a proper level to load the corresponding partition map region block according to the screen space error;
wherein the maximum screen space error is calculated as:

K = height / (2 × tan(fov / 2))

sse = (g × K) / d

that is, sse = (g × height) / (2 × d × tan(fov / 2))

wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; K is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
child nodes of the bounding volume that do not intersect the view cone, i.e. lie entirely outside it, are set to be invisible.
Preferably, the second judging module includes:
the second setting module is used for setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, and setting all nodes of the bounding volume outside the view cone to be invisible if the child nodes of the bounding volume are also positioned outside the view cone, so that all nodes of the bounding volume outside the view cone are eliminated;
the second error calculation module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
The beneficial effects that this application can produce include:
1) Compared with the prior art, the web-end-oriented high-precision large-scene image processing and transmitting method provided by the application is a kd-tree-based scene division method; the division preserves the geometric characteristics within each partition, reduces concurrent resource use, and provides users with random access to the mesh stream. The method realizes reasonable division of large-scale scenes, accelerates the loading of large scenes, optimizes the model simplification effect, and facilitates subsequent transmission work.
2) The web-end-oriented high-precision large-scene image processing and transmitting method uses the screen space error for judgment; by setting a reasonable maxSSE parameter, the closer a block is to the center of the screen, the higher the resolution at which it is loaded, so that, compared with other algorithms, the user's region of interest can be loaded more accurately.
Drawings
FIG. 1 is a schematic diagram of a result obtained after performing kd-Tree algorithm space division by a processing example in an embodiment of the present application;
FIG. 2 is a network loading time chart of the simplified model obtained by processing the result shown in FIG. 1 with the methods of the example and the comparative example provided in the present application; wherein a) is the loading time chart obtained by the comparative example method, and b) is the loading time chart obtained by the method provided in the present application;
FIG. 3 is a process diagram (Hewan village) of an embodiment of the present application, wherein the red circle is the center of the viewpoint of the picture;
fig. 4 is a graph showing the result of the processing of fig. 3 by the method provided in the embodiment of the present application, where the red circle is the view center range, and the definition of the picture in the range is significantly higher than that of fig. 3;
fig. 5 is a schematic flow chart of a web-end-oriented high-precision large-scene image processing and transmitting method provided by the application;
fig. 6 is a schematic diagram of a high-precision large-scene image processing and transmitting device facing to a web end.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, based on the embodiments of the invention, which are apparent to those of ordinary skill in the art without inventive faculty, are intended to be within the scope of the invention.
Technical features that are not used for solving the technical problems of the present application are set or installed according to methods commonly used in the prior art and are not described herein.
Referring to fig. 5, the high-precision large scene image processing and transmitting method facing to the web end provided by the application comprises the following steps:
step S1: performing space division based on kd tree on the model image generated by aerial photography to obtain a division map;
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
step S3: when the simplified model map is loaded on the web side, applying delayed loading and then transmitting;
wherein, step S2 includes the following steps:
step S21: judging whether each visible area block in the division map has been recursed down to the leaf nodes of the division map:
step S211: if the judgment result is negative, checking the positional relation between the bounding volume and the view cone in the division map; if the bounding volume is outside the view cone, setting all its nodes to be invisible and setting its child nodes outside the view cone to be invisible, thereby culling all bounding-volume nodes outside the view cone;
step S212: if the bounding volume intersects the view cone, continuing to recursively check the positional relation between the child nodes of the bounding volume and the view cone; if a child node of the bounding volume is inside the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level according to the screen space error to load the corresponding division map region block;
wherein the maximum screen space error is calculated as:

K = height / (2 × tan(fov / 2))

sse = (g × K) / d

that is, sse = (g × height) / (2 × d × tan(fov / 2))

wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; K is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
if a child node of the bounding volume does not intersect the view cone, i.e. lies entirely outside it, it is set to be invisible;
step S221: if the determination in step S21 is yes, the positional relationship of the bounding volume and the view cone is checked:
step S222: if the bounding volume is located outside the view cone, all nodes of the bounding volume are set to be invisible, and if the child nodes of the bounding volume are also outside the view cone, all nodes of the bounding volume outside the view cone are eliminated;
step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
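Steps S21 to S23 amount to a top-down traversal that combines view-frustum culling with screen-space-error-driven level selection. The sketch below is a minimal, hypothetical Python rendition: the `Tile`, `classify_aabb` and `select_tiles` names, the plane-based AABB test, and the "refine"/"render" labels are illustrative assumptions and are not taken from the patent; only the sse formula follows the definition given above.

```python
import math
from dataclasses import dataclass, field

OUTSIDE, INTERSECTS, INSIDE = 0, 1, 2

@dataclass
class Tile:
    geometric_error: float          # g: geometric error of the block, from the index file
    bounds: tuple                   # ((min_x, min_y, min_z), (max_x, max_y, max_z))
    children: list = field(default_factory=list)
    visible: bool = False
    load: str = ""                  # which action was chosen for this node

def classify_aabb(planes, bounds):
    """Classify an axis-aligned box against inward-facing frustum planes (a, b, c, d)."""
    lo, hi = bounds
    result = INSIDE
    for a, b, c, d in planes:
        # p-vertex: the box corner farthest along the plane normal
        p = (hi[0] if a >= 0 else lo[0],
             hi[1] if b >= 0 else lo[1],
             hi[2] if c >= 0 else lo[2])
        if a * p[0] + b * p[1] + c * p[2] + d < 0:
            return OUTSIDE          # even the farthest corner is behind this plane
        # n-vertex: the box corner nearest along the plane normal
        n = (lo[0] if a >= 0 else hi[0],
             lo[1] if b >= 0 else hi[1],
             lo[2] if c >= 0 else hi[2])
        if a * n[0] + b * n[1] + c * n[2] + d < 0:
            result = INTERSECTS
    return result

def screen_space_error(g, d, fov, height):
    """sse = g * K / d, with K = height / (2 * tan(fov / 2))."""
    k = height / (2.0 * math.tan(fov / 2.0))
    return g * k / max(d, 1e-9)

def hide_subtree(tile):
    tile.visible = False
    for child in tile.children:
        hide_subtree(child)

def select_tiles(tile, planes, distance_to, fov, height, max_sse):
    """Recursive cull-and-refine pass over the division-map tree (steps S21 to S23)."""
    if classify_aabb(planes, tile.bounds) == OUTSIDE:
        hide_subtree(tile)          # S211/S222: cull the bounding volume and all children
        return
    tile.visible = True
    sse = screen_space_error(tile.geometric_error, distance_to(tile), fov, height)
    if sse > max_sse and tile.children:
        tile.load = "refine"        # error too large: descend toward higher detail
        for child in tile.children:
            select_tiles(child, planes, distance_to, fov, height, max_sse)
    else:
        tile.load = "render"        # error acceptable: load this level
```

The p-vertex/n-vertex test lets a whole subtree be discarded with at most six plane evaluations per node, which is why the patent's recursion only descends into bounding volumes that intersect or lie inside the view cone.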
According to the method, because the viewpoint center is where people focus, the part close to the center is loaded first and the content at the edges afterward. This improves the update and transmission speed of the scene recorded by the picture and reduces the time needed to render a clear picture; at the same time, a clear image of the picture's viewpoint center is loaded for the user first, reducing the discomfort of waiting while the picture renders, helping the user quickly acquire picture information, shortening the waiting time, and effectively improving the user experience of image rendering and transmission.
Preferably, step S1 comprises the steps of:
step S11: for the mesh M of each object in the model image, acquiring the bounding box BM of all its vertices VM;
step S12: sequentially determining the longest axis AM among the three axes of each bounding box BM;
step S13: obtaining the midpoint MVM from the vertices VM ordered along each longest axis AM;
step S14: splitting the vertices VM at the midpoint MVM into two partitions: partition ML and partition MR;
step S15: if the vertex count VML of partition ML or VMR of partition MR is greater than N (N being the number of vertices specified by the user for one partition), repeating steps S11 to S14; after the operation finishes according to the above steps, the division map with a tree data structure is obtained.
The kd-tree method is adopted to improve rendering efficiency and to accurately extract the viewpoint-center region of the picture.
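Steps S11 to S15 can be sketched as a median split along the longest bounding-box axis. The following minimal Python rendition uses only the standard library; the function and dictionary key names are illustrative assumptions, not identifiers from the patent.

```python
def kd_partition(vertices, n_max):
    """Recursively split a list of (x, y, z) vertices at the median of the
    longest bounding-box axis, stopping when a partition holds at most
    n_max vertices (the user-specified limit N from step S15)."""
    if len(vertices) <= n_max:
        return {"leaf": True, "vertices": vertices}
    # S11-S12: bounding box BM of the vertices and its longest axis AM
    lo = [min(v[i] for v in vertices) for i in range(3)]
    hi = [max(v[i] for v in vertices) for i in range(3)]
    axis = max(range(3), key=lambda i: hi[i] - lo[i])
    # S13: order the vertices along that axis and take the midpoint MVM
    ordered = sorted(vertices, key=lambda v: v[axis])
    mid = len(ordered) // 2
    # S14-S15: split into partitions ML and MR and recurse on each half
    return {
        "leaf": False,
        "axis": axis,
        "left": kd_partition(ordered[:mid], n_max),
        "right": kd_partition(ordered[mid:], n_max),
    }
```

Because the split always halves the vertex list, the resulting tree is balanced and every leaf block stays under the user's vertex budget, which is what makes per-block simplification and random access to the mesh stream practical.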
Another aspect of the present application also provides an apparatus according to the above method, including:
the division module is used for carrying out space division based on kd tree on the model image generated by aerial photography to obtain a division graph;
the simplified model module is used for carrying out model simplification on each divided block in the division map to obtain a simplified model map;
and the transmission module is used for transmitting after the simplified model diagram is loaded on the web terminal and the loading is delayed.
Preferably, the simplified model module comprises:
the first judging module is used for judging whether each visible area block in the partition map is recursively transmitted to leaf nodes of the partition map;
the second judging module, used for checking the positional relation between the bounding volume and the view cone if the judgment result in step S21 is affirmative:
the loading module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
Preferably, the first judging module includes:
the first setting module, used for checking the positional relation between the bounding volume and the view cone in the division map if the judgment result is negative; if the bounding volume is outside the view cone, setting all its nodes to be invisible and setting its child nodes outside the view cone to be invisible, thereby culling all bounding-volume nodes outside the view cone;
the first error calculation module is used for continuing recursively checking the position relation between the child node of the bounding volume and the view cone if the bounding volume intersects the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error if the child node of the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if the judgment result is negative, selecting a proper level to load the corresponding partition map region block according to the screen space error;
wherein the maximum screen space error is calculated as:

K = height / (2 × tan(fov / 2))

sse = (g × K) / d

that is, sse = (g × height) / (2 × d × tan(fov / 2))

wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; K is the perspective scaling factor; height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRatio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error;
child nodes of the bounding volume that do not intersect the view cone, i.e. lie entirely outside it, are set to be invisible.
Preferably, the second judging module includes:
the second setting module is used for setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, and setting all nodes of the bounding volume outside the view cone to be invisible if the child nodes of the bounding volume are also positioned outside the view cone, so that all nodes of the bounding volume outside the view cone are eliminated;
the second error calculation module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
Examples
The embodiment specifically comprises the following steps:
step S1: performing kd-tree-based space division on the model image generated by aerial photography to obtain a division map, the obtained result being shown in fig. 1;
wherein, step S1 specifically comprises the following steps:
step S11: for each object mesh M in the model image, acquiring the bounding box BM of all of its vertices VM;
step S12: determining the longest axis AM among the three axes of each bounding box BM;
step S13: sorting the vertices VM along the longest axis AM and finding their midpoint MVM.
Step S14: the vertex VM is divided into two half areas from the midpoint MVM: partition ML and partition MR.
Step S15: if the midpoint VML of partition ML or the midpoint VMR of partition MR is greater than N (N is the number of vertices specified by the user in one partition) ) Repeating the steps S11 to S14, and obtaining a division diagram of the tree data structure after finishing the operation according to the steps;
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
wherein, step S2: the method comprises the following steps:
step S21: judging whether the recursion for each visible area block has reached the leaf nodes of the partition map:
step S211: if the judgment result is that the recursion has not reached a leaf node, checking the positional relation between the bounding volume and the view cone:
step S212: if the bounding volume is outside the view cone, all of its nodes are invisible, and its child nodes, likewise outside the view cone, are also invisible, so all bounding volume nodes outside the view cone are culled.
Step S213: if the bounding volume intersects the view cone, continuing to recursively check the positional relationship between the child nodes of the bounding volume and the view cone;
step S214: if the child node of the bounding volume is located inside the view cone, judging whether the error of the current node is larger than the maximum screen space error; if so, the highest-level detail model is loaded; if not, a proper level is selected to load the corresponding block according to the screen space error. In this embodiment four levels are used, selected by sse: sse = 0 to 0.25 maps to L0, sse = 0.25 to 0.5 to L1, sse = 0.5 to 1 to L2, and sse = 1 to 2 to L3.
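The four-level selection described above can be sketched as a simple threshold function. The half-open interval boundaries and the function name selectLevel are assumptions for illustration; the embodiment only states the ranges.

```javascript
// Sketch of the four-level LOD selection (thresholds from the embodiment:
// L0 for sse in [0, 0.25), L1 in [0.25, 0.5), L2 in [0.5, 1), L3 in [1, 2]).
// A larger screen space error calls for a more detailed level.
function selectLevel(sse) {
  if (sse < 0.25) return 'L0';
  if (sse < 0.5) return 'L1';
  if (sse < 1) return 'L2';
  return 'L3';
}
```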
The screen space error formula is as follows:
fov_y = fov, if aspectRatio ≤ 1; fov_y = 2·arctan(tan(fov/2) / aspectRatio), otherwise

K = height / (2·tan(fov_y/2))

sse = (g·K) / d
wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; k is the perspective scaling factor, where height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRadio is the aspect ratio of the near clipping plane of the perspective frustum.
Step S221: if the judgment result is that the recursion has reached the leaf nodes, checking the positional relation between the bounding volume and the view cone:
step S222: if the bounding volume is outside the view cone, the node is invisible, and its child nodes, also outside the view cone, are invisible as well; all nodes of the bounding volume outside the view cone are culled.
Step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
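The recursive cull-and-select control flow of steps S21 to S23 can be sketched as follows. The node shape and the classify and visit callbacks are illustrative assumptions; real frustum testing would come from the rendering library.

```javascript
// Sketch of the recursive frustum-cull and LOD-select pass.
// classify(bounds) is assumed to return 'outside' | 'intersect' | 'inside';
// visit(node, mode) receives each renderable node and the chosen loading mode.
function traverse(node, classify, maxSse, visit) {
  const rel = classify(node.bounds);
  if (rel === 'outside') {
    node.visible = false; // S212 / S222: cull the whole subtree
    return;
  }
  if (rel === 'intersect' && node.children && node.children.length) {
    // S213: recurse into the children to refine the test
    for (const child of node.children) traverse(child, classify, maxSse, visit);
    return;
  }
  // S214 / S23: inside the frustum - choose by screen space error
  node.visible = true;
  visit(node, node.sse > maxSse ? 'highest-detail' : 'by-sse');
}
```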
Step S3: when the simplified model diagram is loaded on the web side, delaying loading and then transmitting.
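One way the delayed loading of step S3 could be realised is to queue tile requests and flush them after the first frame has been presented, so that the initial render is not blocked. This is a hedged sketch: createDeferredLoader and fetchTile are hypothetical names, not a real library API.

```javascript
// Deferred tile loading sketch: requests accumulate in a queue instead of
// being issued immediately; flush() issues them in priority order.
function createDeferredLoader(fetchTile) {
  const queue = [];
  return {
    // priority could e.g. be higher for tiles nearer the view centre
    request(tile, priority) { queue.push({ tile, priority }); },
    flush() {
      queue.sort((a, b) => b.priority - a.priority);
      for (const { tile } of queue.splice(0)) fetchTile(tile);
    },
  };
}
```

A caller would typically invoke flush() from a setTimeout or requestAnimationFrame callback once the initial page content is visible.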
Comparative example
The same objects as in the embodiment are processed with the octree partitioning algorithm disclosed in: Shen Yongzeng, Liu Dongyue, Xu Jun. Design and implementation of an octree-based virtual scene manager [J]. Computer Systems & Applications, 2012, 21(03): 147-150+45.
Execution environment of the above steps in this embodiment: a 3.60 GHz Intel(R) Core(TM) i9-9900KF CPU, 32.0 GB of memory, a Windows 10 64-bit operating system and an NVIDIA GTX 1080 Ti graphics card. The application software comprises the Three.js library.
The development software comprises: NeuronDataReader SDK, Prepar3D SDK, 3DMAX and Visual Studio 2017; the development language is JavaScript.
Fig. 1 was divided with the method of this embodiment and with the method of the comparative example; the time required for each division is shown in fig. 2 (a chart of the time required for loading on the network). As fig. 2 shows, the kd-tree method adopted in this application completes the space division in only 5000 ms, whereas the octree algorithm adopted in the comparative example needs 7000 ms, so the method provided by this application yields higher loading efficiency. The model loading speed of the corresponding scene generated with the kd-tree algorithm was found to be about 17% faster than that of the model generated with the octree algorithm, an improvement in loading efficiency.
The result obtained by processing fig. 3 with the method of the embodiment is shown in fig. 4; the red frames in figs. 3 and 4 all mark the view centre. To simplify processing, the display definition of the view-centre area in fig. 4 is improved over that of fig. 3 while the overall transmission and loading time is shortened; with the processed image shown in fig. 4, the user acquires the view-centre information faster, the image transmission time is reduced, and the user experience is improved.
Since the algorithm is affected by the distance between a block and the centre point of the screen, at the same viewing distance the closer a block is to the screen centre, the higher the resolution at which it is loaded.
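The centre weighting described above can be sketched as a normalised priority: at equal viewing distance, a block projected nearer the screen centre scores higher and is therefore loaded at a higher resolution. The function name and normalisation are assumptions for illustration.

```javascript
// Centre-weighted loading priority sketch: 1 at the screen centre,
// falling to 0 at the screen corners.
function centerPriority(screenX, screenY, width, height) {
  const dx = screenX - width / 2;
  const dy = screenY - height / 2;
  const dist = Math.hypot(dx, dy);            // distance to screen centre
  const maxDist = Math.hypot(width / 2, height / 2); // centre-to-corner distance
  return 1 - dist / maxDist;
}
```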
Although the present invention has been described with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the described embodiments or equivalents may be substituted for elements thereof; any modifications, equivalents, improvements and changes made without departing from the spirit and principles of the present invention shall fall within its scope.

Claims (6)

1. The web-end-oriented high-precision large-scene image processing method is characterized by comprising the following steps of:
step S1: performing space division based on kd tree on the model image generated by aerial photography to obtain a division map;
step S2: model simplification is carried out on each divided block in the division map, and a simplified model map is obtained;
step S3: after the simplified model diagram is loaded on the web side, delaying loading and then transmitting;
wherein, step S2 includes the following steps:
step S21: judging whether the recursion for each visible area block in the partition map has reached the leaf nodes of the partition map:
step S211: if the judgment result is negative, checking the position relation between the bounding box and the view cone in the partitioning graph, and if the bounding box is positioned outside the view cone, setting all nodes of the bounding box to be invisible, setting child nodes of the bounding box outside the view cone to be invisible, so as to eliminate all bounding box nodes outside the view cone;
step S212: if the bounding volume intersects with the view cone, continuing recursively checking the position relation between the child nodes of the bounding volume and the view cone, if the child nodes of the bounding volume are positioned in the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if the judgment result is negative, selecting a proper level to load the corresponding partition map region block according to the screen space error;
wherein the maximum screen space error is calculated as:
fov_y = fov, if aspectRatio ≤ 1; fov_y = 2·arctan(tan(fov/2) / aspectRatio), otherwise

K = height / (2·tan(fov_y/2))

sse = (g·K) / d
wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; k is the perspective scaling factor and height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRadio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error
if a child node of the bounding volume does not intersect the view cone and lies outside it, the child node is set to be invisible;
step S221: if the determination in step S21 is yes, the positional relationship of the bounding volume and the view cone is checked:
step S222: if the bounding volume is located outside the view cone, all nodes of the bounding volume are set to be invisible, and if the child nodes of the bounding volume are also outside the view cone, all nodes of the bounding volume outside the view cone are eliminated;
step S23: if the bounding volume is positioned in the view cone, judging whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting an appropriate level for loading according to the screen space error.
2. The web-side-oriented high-precision large-scene image processing method according to claim 1, wherein the step S1 comprises the steps of:
step S11: acquiring grids M of surrounding boxes BM for all vertexes VM of each object in the image in the model image;
step S12: sequentially determining the longest axis AM among three axes of each bounding box BM;
step S13: obtaining a midpoint MVM which is sequenced from the vertex VM according to the sequence of each longest axis AM;
step S14: vertex VM is split from midpoint MVM: partition ML and partition MR;
step S15: if the vertex count VML of partition ML or the vertex count VMR of partition MR is greater than N, N being the number of vertices specified by the user in one partition, repeating the steps S11 to S14; the division map of the tree data structure is obtained after the above operations are finished.
3. A device for a web-side-oriented high-precision large-scene image processing method according to claim 1 or 2, comprising: the division module is used for carrying out space division based on kd tree on the model image generated by aerial photography to obtain a division graph;
the simplified model module is used for carrying out model simplification on each divided block in the division map to obtain a simplified model map;
and the transmission module is used for transmitting after the simplified model diagram is loaded on the web terminal and the loading is delayed.
4. The web-side-oriented high-precision large-scene image processing apparatus according to claim 3, wherein the simplified model module comprises:
the first judging module is used for judging whether the recursion for each visible area block in the partition map has reached the leaf nodes of the partition map;
a second judging module, configured to check the positional relationship between the bounding volume and the view cone if the judgment result of the first judging module is yes:
the loading module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
5. The web-side-oriented high-precision large-scene image processing apparatus according to claim 4, wherein the first judgment module comprises:
the first setting module is used for checking the position relation between the bounding box and the view cone in the division graph if the judgment result is negative, setting all nodes of the bounding box to be invisible if the bounding box is positioned outside the view cone, setting sub-nodes of the bounding box outside the view cone to be invisible, and eliminating all bounding box nodes outside the view cone;
the first error calculation module is used for: if the bounding volume intersects the view cone, continuing to recursively check the positional relation between the child nodes of the bounding volume and the view cone; if a child node of the bounding volume is located inside the view cone, continuing to judge whether the error of the current node is larger than the maximum screen space error, and if so, loading the highest-level detail model; if not, selecting a proper level according to the screen space error to load the corresponding partition map region block;
wherein the maximum screen space error is calculated as:
fov_y = fov, if aspectRatio ≤ 1; fov_y = 2·arctan(tan(fov/2) / aspectRatio), otherwise

K = height / (2·tan(fov_y/2))

sse = (g·K) / d
wherein: g is the geometric error of the block, read from the index file; d is the nearest distance from the viewpoint to the block; k is the perspective scaling factor and height is the screen height in pixels; fov is the angle between the upper and lower clipping planes of the perspective frustum; aspectRadio is the aspect ratio of the near clipping plane of the perspective frustum; sse is the screen space error and,
the child nodes of the bounding volume are set to be invisible if they do not intersect the view cone and are outside the view cone.
6. The web-side-oriented high-precision large-scene image processing apparatus according to claim 4, wherein the second judging module comprises:
the second setting module is used for setting all nodes of the bounding volume to be invisible if the bounding volume is positioned outside the view cone, and setting all nodes of the bounding volume outside the view cone to be invisible if the child nodes of the bounding volume are also positioned outside the view cone, so that all nodes of the bounding volume outside the view cone are eliminated;
the second error calculation module is used for judging whether the error of the current node is larger than the maximum screen space error if the bounding volume is positioned in the view cone, and loading the highest-level detail model if the error of the current node is larger than the maximum screen space error; if not, selecting an appropriate level for loading according to the screen space error.
CN202310077909.6A 2023-01-13 2023-01-13 High-precision large-scene image processing and transmitting method and device facing web end Active CN116366827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310077909.6A CN116366827B (en) 2023-01-13 2023-01-13 High-precision large-scene image processing and transmitting method and device facing web end

Publications (2)

Publication Number Publication Date
CN116366827A true CN116366827A (en) 2023-06-30
CN116366827B CN116366827B (en) 2024-02-06

Family

ID=86940221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310077909.6A Active CN116366827B (en) 2023-01-13 2023-01-13 High-precision large-scene image processing and transmitting method and device facing web end

Country Status (1)

Country Link
CN (1) CN116366827B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150035830A1 (en) * 2010-12-24 2015-02-05 Xiaopeng Zhang Method for real-time and realistic rendering of complex scenes on internet
CN106599493A (en) * 2016-12-19 2017-04-26 重庆市勘测院 Visual implementation method of BIM model in three-dimensional large scene
US20200020154A1 (en) * 2017-01-05 2020-01-16 Bricsys Nv Point Cloud Preprocessing and Rendering
CN110990737A (en) * 2019-12-09 2020-04-10 江苏艾佳家居用品有限公司 LOD-based lightweight loading method for indoor scene of web end
CN111968212A (en) * 2020-09-24 2020-11-20 中国测绘科学研究院 Viewpoint-based dynamic scheduling method for three-dimensional urban scene data
WO2021174659A1 (en) * 2020-03-04 2021-09-10 杭州群核信息技术有限公司 Webgl-based progressive real-time rendering method for editable large scene

Also Published As

Publication number Publication date
CN116366827B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN111340928B (en) Ray tracing-combined real-time hybrid rendering method and device for Web end and computer equipment
WO2022193941A1 (en) Image rendering method and apparatus, device, medium, and computer program product
US8817025B1 (en) Efficiently implementing and displaying independent 3-dimensional interactive viewports of a virtual world on multiple client devices
CN111068312B (en) Game picture rendering method and device, storage medium and electronic equipment
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
CN112288873A (en) Rendering method and device, computer readable storage medium and electronic equipment
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN110930492B (en) Model rendering method, device, computer readable medium and electronic equipment
Li et al. CEBOW: A Cloud‐Edge‐Browser Online Web3D approach for visualizing large BIM scenes
CN116366827B (en) High-precision large-scene image processing and transmitting method and device facing web end
CN112494941A (en) Display control method and device of virtual object, storage medium and electronic equipment
CN114820910A (en) Rendering method and device
JP7346741B2 (en) Methods, computer systems, and computer programs for freeview video coding
Wang et al. Performance bottleneck analysis and resource optimized distribution method for IoT cloud rendering computing system in cyber-enabled applications
CN114119831A (en) Snow accumulation model rendering method and device, electronic equipment and readable medium
CN114461959A (en) WEB side online display method and device of BIM data and electronic equipment
Pasman et al. Scheduling level of detail with guaranteed quality and cost
WO2023029424A1 (en) Method for rendering application and related device
Liang et al. A point-based rendering approach for real-time interaction on mobile devices
WO2024109006A1 (en) Light source elimination method and rendering engine
CN116402975B (en) Method and device for loading and rendering three-dimensional model in WEB platform environment
US20240203030A1 (en) 3d model rendering method and apparatus, electronic device, and storage medium
Jiang et al. A large-scale scene display system based on webgl
EP4258218A1 (en) Rendering method, device, and system
CN112465943B (en) Color rendering method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Hu Yong; Geng Chenming; Shen Xukun
Inventor before: Geng Chenming; Shen Xukun; Hu Yong
GR01 Patent grant