CN116129019A - Data processing method for oblique photography - Google Patents

Data processing method for oblique photography

Info

Publication number
CN116129019A
CN116129019A (application CN202211728257.XA)
Authority
CN
China
Prior art keywords: target, vertex, slice, data, texture
Prior art date
Legal status: Pending
Application number
CN202211728257.XA
Other languages
Chinese (zh)
Inventors
李刚 (Li Gang)
李冬林 (Li Donglin)
Current Assignee
Beijing Global Safety Technology Co Ltd
Original Assignee
Beijing Global Safety Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Global Safety Technology Co Ltd
Priority to CN202211728257.XA
Publication of CN116129019A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

The application provides a data processing method for oblique photography, relating to the technical field of three-dimensional data processing. The method includes: acquiring raw data of oblique photography; performing root node merging on a plurality of original slices to obtain merged slices; performing vertex thinning on the vertices in each merged slice to obtain the thinned target vertices in the merged slice, and performing image processing on the texture images for rendering the merged slices to obtain the target texture corresponding to each merged slice; and generating target data of the oblique photography based on the target vertices of the merged slices and the target textures for rendering the merged slices, so that a three-dimensional oblique photography image is displayed on the client based on the target data. By reducing the number of slices and the data volume of the oblique photography data, the video memory occupied by the oblique photography data is effectively reduced, and loading performance and loading efficiency are improved.

Description

Data processing method for oblique photography
Technical Field
The application relates to the technical field of three-dimensional data processing, in particular to a data processing method for oblique photography.
Background
With the development of Internet technology, all kinds of resource information are highly shared, and sharing information through Web services provided by the Internet has become a common practice. Currently, smart cities and the digital earth are developing rapidly in the three-dimensional technical field, and rendering and displaying oblique photography three-dimensional models on the Web end is a major hot topic in current Web three-dimensional visualization.
When the area covered by the oblique photography data is large, the number of top-level slice files is huge, and it becomes very important to enable oblique photography data produced in an Internet environment to be loaded rapidly and stably with smooth interaction.
Disclosure of Invention
The object of the present application is to solve, at least to some extent, one of the above technical problems.
Therefore, the application provides a data processing method for oblique photography, which is implemented by acquiring raw data of oblique photography, where the raw data includes a plurality of original slices and texture images for rendering each original slice, and each original slice includes a plurality of vertices for splitting the original slice; performing root node merging on the plurality of original slices to obtain merged slices; performing vertex thinning on the vertices in each merged slice to obtain the thinned target vertices in the merged slice, and performing image processing on the texture images for rendering the merged slices to obtain the target texture corresponding to each merged slice; and generating target data of the oblique photography based on the target vertices of the merged slices and the target textures for rendering the merged slices, so that a three-dimensional oblique photography image is displayed on the client based on the target data. In this way, on the one hand the number of slices is reduced, which reduces the number of slice requests during data loading; on the other hand the data volume of the oblique photography data is reduced, which effectively reduces the video memory occupied by the oblique photography data and improves loading performance and loading efficiency.
An embodiment of a first aspect of the present application provides a data processing method for oblique photography, including:
acquiring raw data of oblique photography, wherein the raw data comprises a plurality of raw slices and texture images for rendering the raw slices, and each raw slice comprises a plurality of vertexes for splitting the raw slice;
merging the plurality of original slices to obtain a merged slice;
performing vertex thinning on the vertexes in the merged slice to obtain each target vertex after thinning in the merged slice, and performing image processing on a texture image rendering the merged slice to obtain a target texture corresponding to each merged slice;
and generating target data of the oblique photography based on the target vertexes of the combined slices and rendering target textures of the combined slices so as to display three-dimensional oblique photography images on a client based on the target data.
Optionally, the merging the plurality of original slices to obtain a merged slice includes: reading the original data to reconstruct an LOD node model of each original slice, wherein the LOD node model comprises tree-shaped multi-layer nodes, and vertexes contained in the nodes of each layer are used for rendering images with corresponding resolutions; combining sub-layer nodes in the LOD node model by adopting a combining function to obtain a combined root node in the target model, wherein a first vertex contained in the combined root node comprises vertices contained in all sub-layer nodes before combination; determining a center point of the target model according to an outer surrounding box of the target model obtained by combining vertexes contained in the root node; updating the vertex positions of the vertices contained in each node in the target model according to the center point of the target model; the merged slice is determined based on vertex positions of vertices contained in the target model.
Optionally, the LOD node model adopts a quadtree structure, and the method further includes: recursively checking, from the top layer of the LOD node model, whether the nodes in the hierarchy to be merged are missing child nodes of the quadtree structure; and, where a child node is missing, copying the parent node as the missing child node.
Optionally, performing vertex thinning on vertices in the merged slice to obtain each thinned target vertex in the merged slice, including: obtaining vertex information for the vertices in the merged slice, wherein the vertex information comprises vertex position and vertex index data; and thinning the vertexes in the combined slice based on the vertex information by adopting a vertex clustering algorithm to obtain each target vertex after thinning in the combined slice.
Optionally, the vertex information further includes texture coordinates for indicating pixel points in the rendered image to render; the method further comprises the steps of: and updating the texture coordinates according to the vertex positions of the thinned target vertices.
Optionally, the performing image processing on the texture image of the merged slice to obtain a target texture corresponding to each merged slice includes: and reducing the resolution of the texture image rendering the combined slice, and/or performing image compression on the texture image rendering the combined slice to obtain the target texture corresponding to each combined slice.
Optionally, the reducing the resolution of the texture image rendering the merged slice includes: reducing the size of the texture image rendering the combined slice according to a set scaling factor, wherein the value of the scaling factor is smaller than 1; and determining the pixel value of each pixel in the target texture by adopting a cubic spline interpolation function based on the reduced image size.
Optionally, the image compressing the texture image rendering the merged slice includes: and according to the set compression mode and the set picture quality factor, adopting a set compression algorithm to compress the texture image.
Optionally, the target data includes rendering data; the method further comprises the steps of: generating a single primitive data block based on vertex information for at least two target vertices, wherein the vertex information comprises one or more combinations of vertex coordinates, vertex indices, texture coordinates, and normal vector structures; combining texture images according to at least two target textures required by rendering of the at least two target vertexes so as to obtain a single picture data block; updating texture coordinates in the single primitive data block based on the image size corresponding to the single picture data block and the image sizes of at least two target textures before texture image merging; and generating rendering data for indicating the GPU of the client to render according to the single primitive data block and the single picture data block.
Optionally, the method further comprises: and carrying out quantization processing on the vertex information of the floating point data type in the single primitive data block so as to convert the vertex information into the vertex information of the integer data type.
According to the data processing method for oblique photography of the present application, raw data of oblique photography is acquired, where the raw data includes a plurality of original slices and texture images for rendering each original slice, and each original slice includes a plurality of vertices for splitting the original slice; root node merging is performed on the plurality of original slices to obtain merged slices; vertex thinning is performed on the vertices in each merged slice to obtain the thinned target vertices in the merged slice, and image processing is performed on the texture images for rendering the merged slices to obtain the target texture corresponding to each merged slice; and target data of the oblique photography is generated based on the target vertices of the merged slices and the target textures for rendering the merged slices, so that a three-dimensional oblique photography image is displayed on the client based on the target data. In this way, on the one hand the number of slices is reduced, which reduces the number of slice requests during data loading; on the other hand the data volume of the oblique photography data is reduced, which effectively reduces the video memory occupied by the oblique photography data and improves loading performance and loading efficiency.
An embodiment of a second aspect of the present application proposes a data processing apparatus for oblique photography, the apparatus comprising:
an acquisition module, configured to acquire raw data of oblique photography, where the raw data includes a plurality of raw slices and texture images for rendering each of the raw slices, and each of the raw slices includes a plurality of vertices for splitting the raw slice;
the first merging module is used for merging the plurality of original slices to obtain a merged slice;
the thinning module is used for carrying out vertex thinning on the vertexes in the combined slice so as to obtain each target vertex after thinning in the combined slice;
the first processing module is used for carrying out image processing on the texture image rendering the combined slices so as to obtain target textures corresponding to the combined slices;
the first generation module is used for generating target data of the oblique photography based on the target vertex of the combined slice and rendering target texture of the combined slice so as to display a three-dimensional oblique photography image on a client based on the target data.
Optionally, the merging the plurality of original slices to obtain a merged slice includes: reading the original data to reconstruct an LOD node model of each original slice, wherein the LOD node model comprises tree-shaped multi-layer nodes, and vertexes contained in the nodes of each layer are used for rendering images with corresponding resolutions; combining sub-layer nodes in the LOD node model by adopting a combining function to obtain a combined root node in the target model, wherein a first vertex contained in the combined root node comprises vertices contained in all sub-layer nodes before combination; determining a center point of the target model according to an outer surrounding box of the target model obtained by combining vertexes contained in the root node; updating the vertex positions of the vertices contained in each node in the target model according to the center point of the target model; the merged slice is determined based on vertex positions of vertices contained in the target model.
Optionally, the LOD node model adopts a quadtree structure, and the method further includes: recursively checking, from the top layer of the LOD node model, whether the nodes in the hierarchy to be merged are missing child nodes of the quadtree structure; and, where a child node is missing, copying the parent node as the missing child node.
Optionally, performing vertex thinning on vertices in the merged slice to obtain each thinned target vertex in the merged slice, including: obtaining vertex information for the vertices in the merged slice, wherein the vertex information comprises vertex position and vertex index data; and thinning the vertexes in the combined slice based on the vertex information by adopting a vertex clustering algorithm to obtain each target vertex after thinning in the combined slice.
Optionally, the vertex information further includes texture coordinates for indicating pixel points in the rendered image to render; the method further comprises the steps of: and updating the texture coordinates according to the vertex positions of the thinned target vertices.
Optionally, the performing image processing on the texture image of the merged slice to obtain a target texture corresponding to each merged slice includes: and reducing the resolution of the texture image rendering the combined slice, and/or performing image compression on the texture image rendering the combined slice to obtain the target texture corresponding to each combined slice.
Optionally, the reducing the resolution of the texture image rendering the merged slice includes: reducing the size of the texture image rendering the combined slice according to a set scaling factor, wherein the value of the scaling factor is smaller than 1; and determining the pixel value of each pixel in the target texture by adopting a cubic spline interpolation function based on the reduced image size.
Optionally, the image compressing the texture image rendering the merged slice includes: and according to the set compression mode and the set picture quality factor, adopting a set compression algorithm to compress the texture image.
Optionally, the target data includes rendering data; the method further comprises the steps of: generating a single primitive data block based on vertex information for at least two target vertices, wherein the vertex information comprises one or more combinations of vertex coordinates, vertex indices, texture coordinates, and normal vector structures; combining texture images according to at least two target textures required by rendering of the at least two target vertexes so as to obtain a single picture data block; updating texture coordinates in the single primitive data block based on the image size corresponding to the single picture data block and the image sizes of at least two target textures before texture image merging; and generating rendering data for indicating the GPU of the client to render according to the single primitive data block and the single picture data block.
According to the data processing device for oblique photography of the present application, raw data of oblique photography is acquired, where the raw data includes a plurality of original slices and texture images for rendering each original slice, and each original slice includes a plurality of vertices for splitting the original slice; root node merging is performed on the plurality of original slices to obtain merged slices; vertex thinning is performed on the vertices in each merged slice to obtain the thinned target vertices in the merged slice, and image processing is performed on the texture images for rendering the merged slices to obtain the target texture corresponding to each merged slice; and target data of the oblique photography is generated based on the target vertices of the merged slices and the target textures for rendering the merged slices, so that a three-dimensional oblique photography image is displayed on the client based on the target data. In this way, on the one hand the number of slices is reduced, which reduces the number of slice requests during data loading; on the other hand the data volume of the oblique photography data is reduced, which effectively reduces the video memory occupied by the oblique photography data and improves loading performance and loading efficiency.
An embodiment of a fourth aspect of the present application proposes an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the data processing method of oblique photography as described in the first aspect when the program is executed.
An embodiment of a fifth aspect of the present application proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the data processing method of oblique photography as described in the first aspect.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flowchart of a data processing method for oblique photography according to an embodiment of the present disclosure;
fig. 2 is a flow chart of a data processing method of oblique photography according to a second embodiment of the present disclosure;
fig. 3 is a flow chart of a data processing method of oblique photography according to a third embodiment of the present disclosure;
FIG. 4 is a flow chart of a data processing method of oblique photography provided by the present application;
fig. 5 is a schematic structural diagram of a data processing apparatus for oblique photography according to a fourth embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
Oblique photography is an advanced technology that has developed in the field of international photogrammetry over the past decade or so. It acquires rich, high-resolution textures of the top surfaces and side views of buildings by synchronously capturing images from five different viewing angles: one vertical and four oblique. The method not only truly reflects ground-object conditions and acquires object texture information with high precision, but can also generate a realistic three-dimensional city model through advanced positioning, fusion, modeling and other technologies. The technology is widely applied in industries such as emergency command, homeland security, city management, smart cities and digital twins.
With the development of Internet technology, all kinds of resource information are highly shared, and sharing information through Web services provided by the Internet has become a common practice; smart cities and the digital earth are therefore developing rapidly in the three-dimensional technical field, and rendering and displaying oblique photography three-dimensional models on the Web end is a major hot topic in current Web three-dimensional visualization. When the area covered by the oblique photography data is large, the top-level nodes can easily reach hundreds of thousands of slice files. When the oblique photography data are browsed in full view, the top-level slice files are requested and loaded onto the map one by one, so that during loading the data are loaded and displayed slice by slice according to the slice nodes; the loading time is usually more than 1 minute, and rendering stutters occur during drag interaction. Moreover, when the data volume is too large the browser easily crashes, resulting in a poor browsing and interaction experience for the user.
In view of the above problems, embodiments of the present application provide a data processing method for oblique photography.
The data processing method of oblique photography provided in the present application will be described in detail with reference to fig. 1.
The embodiment of the application can be applied to the server side.
Fig. 1 is a flowchart of a data processing method for oblique photography according to an embodiment of the present application.
As shown in fig. 1, the data processing method of oblique photography includes the steps of:
step 101, acquiring raw data of oblique photography.
The original data may include a plurality of original slices and texture images for rendering the original slices.
Wherein each original slice may include a plurality of vertices for dissecting the original slice.
In the embodiment of the present application, the raw data may be in OSGB (OpenSceneGraph Binary) format.
Note that the OSGB format uses texture images in JPEG (Joint Photographic Experts Group) format by default; therefore, in this application, the image format of the texture maps used for rendering each original slice may be the JPEG format.
In the embodiment of the present application, the raw data of the oblique photography may be obtained, for example, by reading metadata of the raw data of the oblique photography, where the metadata may include a data name, a data format, and the like, which is not limited in the present application.
And 102, combining the root nodes of the plurality of original slices to obtain a combined slice.
In the embodiment of the application, root node merging can be performed on a plurality of original slices to obtain a merged slice.
For example, the top-level reconstruction process may be performed on the raw data, and merging of multiple raw slices may be performed to obtain a merged slice, i.e., generating coarse-grained data from fine-grained data.
Therefore, root nodes of a plurality of original slices are combined to obtain combined slices, the number of the slices can be reduced, and when a client loads data, the number of times of slice requests is reduced, so that the speed of data loading is improved.
And 103, performing vertex thinning on the vertices in the combined slice to obtain each target vertex after thinning in the combined slice, and performing image processing on the texture image of the rendered combined slice to obtain the target texture corresponding to each combined slice.
In the embodiment of the application, vertex thinning can be performed on vertices in the combined slice, so as to obtain each target vertex after thinning in the combined slice.
It will be appreciated that by thinning the vertices in the combined slice, fewer target vertices may be obtained to facilitate subsequent increases in data processing speed.
In the embodiment of the present application, the texture image of the rendered merged slice may also be subjected to image processing, so as to obtain the target texture corresponding to each merged slice.
As one possible implementation, the resolution of the texture image rendering the merged slice may be reduced, such as by employing a pixel area based scaling algorithm.
In order to achieve the reduction of the resolution of the texture image, in a possible implementation manner of the embodiment of the present application, the texture image of the rendered merging slice may be reduced in size according to a set scaling factor, where the value of the scaling factor is less than 1; and the pixel value of each pixel in the target texture can be determined by adopting a cubic spline interpolation function based on the reduced image size.
In this embodiment of the present application, the set scaling factor may be a preset image size reduction ratio, and the value of the scaling factor is smaller than 1, for example, the scaling factor may be 1/2, 1/3, etc., which is not limited in this application.
As an example, assume that the scaling factor is r, the image size of the texture image is m×n, and the pixel value of each pixel of the texture image is denoted Img. The texture image for rendering the merged slice is scaled down by the set scaling factor r, and the scaled-down image size is m'×n', where:
m'=m*r; (1)
n'=n*r; (2)
Based on the scaled down image size, a cubic spline interpolation function Resize () may be used to determine the pixel value Img' for each pixel in the target texture:
Img'=Resize(Img,m',n'); (3)
therefore, by reducing the resolution of the texture image for rendering the merged slice, the memory consumption and the bandwidth occupation of the texture image can be effectively reduced, and the running efficiency can be improved.
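A minimal sketch of the downscaling step in equations (1)-(3) is given below. OpenCV is assumed purely for illustration (the patent does not name a library), and its INTER_CUBIC flag performs bicubic interpolation rather than a strict cubic-spline Resize(); the file name in the usage comment is hypothetical.

```python
# Sketch of equations (1)-(3): downscale a texture by a scaling factor r (< 1)
# and resample with cubic interpolation. OpenCV is an assumed library choice.
import cv2

def downscale_texture(img, r: float):
    """Return the texture resized to (m*r) x (n*r) using cubic interpolation."""
    assert 0 < r < 1, "scaling factor must be smaller than 1"
    n, m = img.shape[:2]                               # n = rows (height), m = cols (width)
    m2, n2 = max(1, int(m * r)), max(1, int(n * r))    # m' = m*r, n' = n*r
    return cv2.resize(img, (m2, n2), interpolation=cv2.INTER_CUBIC)

# Example: img = cv2.imread("tile_texture.jpg"); small = downscale_texture(img, 0.5)
```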
As yet another possible implementation, the texture image of the rendered merged slice may be image compressed to obtain the target texture for each merged slice.
In order to implement image compression of texture images of rendered merged slices, in a possible implementation manner of the embodiment of the present application, a set compression algorithm may be used to compress the texture images according to a set compression mode and a set picture quality factor.
In the embodiment of the present application, the compression mode set may be preset, for example, texture (Texture) compression.
In the embodiment of the present application, the set picture quality factor may be preset, for example, may be 75, 80, etc., which is not limited in this application.
In the embodiment of the present application, the compression algorithm is preset, for example, the compression algorithm is WebP.
As an example, assume that the texture image before compression is Texture. After the compression mode for texture compression is selected, the texture image is compressed by the set compression algorithm based on the set picture quality factor quality, and the processed texture image is Texture':
Texture' = WebP(Texture, quality); (4)
it can be appreciated that by compressing the texture image for rendering the merged slice, the memory footprint of the oblique photography data can be effectively reduced, and the operation efficiency and stability can be improved.
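The following is a small illustrative sketch of equation (4). Pillow is assumed as the encoder, and the file paths and the quality value 75 are hypothetical examples.

```python
# Sketch of equation (4): compress a texture image to WebP with a set picture
# quality factor. Pillow is an assumed library choice for illustration.
from PIL import Image

def compress_texture_webp(src_path: str, dst_path: str, quality: int = 75) -> None:
    """Texture' = WebP(Texture, quality): re-encode the texture as lossy WebP."""
    with Image.open(src_path) as tex:
        tex.save(dst_path, format="WEBP", quality=quality)

# compress_texture_webp("tile_texture.jpg", "tile_texture.webp", quality=75)
```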
And 104, generating oblique photographing target data based on the target vertexes of the merged slice and rendering the target textures of the merged slice so as to display the three-dimensional oblique photographing image on the client based on the target data.
In the embodiment of the application, the target data of oblique photography can be generated based on the target vertex of the merged slice and the target texture of the merged slice, so that the three-dimensional oblique photography image can be displayed on the client based on the generated target data.
According to the data processing method for oblique photography of the present application, raw data of oblique photography is acquired, where the raw data includes a plurality of original slices and texture images for rendering each original slice, and each original slice includes a plurality of vertices for splitting the original slice; root node merging is performed on the plurality of original slices to obtain merged slices; vertex thinning is performed on the vertices in each merged slice to obtain the thinned target vertices in the merged slice, and image processing is performed on the texture images for rendering the merged slices to obtain the target texture corresponding to each merged slice; and target data of the oblique photography is generated based on the target vertices of the merged slices and the target textures for rendering the merged slices, so that a three-dimensional oblique photography image is displayed on the client based on the target data. In this way, on the one hand the number of slices is reduced, which reduces the number of slice requests during data loading; on the other hand the data volume of the oblique photography data is reduced, which effectively reduces the video memory occupied by the oblique photography data and improves loading performance and loading efficiency.
In order to clearly explain how to perform root node merging on a plurality of original slices to obtain a merged slice in the above embodiment of the present application, the present application further provides a data processing method of oblique photography.
Fig. 2 is a flow chart of a data processing method of oblique photography according to a second embodiment of the present application.
As shown in fig. 2, the data processing method of oblique photography may include the steps of:
in step 201, raw data of oblique photography is acquired.
The execution of step 201 may refer to the execution of any embodiment of the present application, which is not described herein.
Step 202, the raw data is read to reconstruct the LOD node model of each raw slice.
The LOD (Level of Detail) node model includes tree-shaped multi-layer nodes; the tree may be, among others, a quadtree or an octree, which is not limited in this application.
Wherein, the vertexes contained in each layer of nodes can be used for rendering images with corresponding resolutions.
In the embodiment of the application, the original data can be read to reconstruct the LOD node model of each original slice.
In the case that the LOD node model organizes each node in the model by adopting a quadtree structure, as a possible implementation manner, it may be possible to recursively check from the top layer for the LOD node model whether the nodes in the hierarchy to be merged lack child nodes of the quadtree structure; in the event that there is a miss, the parent node may be copied as the missing child node.
Therefore, missing nodes in the LOD node model are supplemented, so that the accuracy of data is improved.
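The recursive completion step can be pictured with the hedged sketch below; the Node structure and the depth limit are assumptions made for illustration, not the patent's actual data structures.

```python
# Sketch: complete missing quadtree children before merging. Starting from the
# top level, any node missing one of its four children gets a copy of the
# parent's data in the empty slot, so later merging stays uniform.
from dataclasses import dataclass, field
import copy

@dataclass
class Node:
    vertices: list
    children: list = field(default_factory=lambda: [None, None, None, None])

def fill_missing_children(node: Node, depth: int, max_depth: int) -> None:
    if depth >= max_depth:                      # only check levels that will be merged
        return
    for k, child in enumerate(node.children):
        if child is None:
            # copy the parent as the missing child node
            node.children[k] = Node(vertices=copy.deepcopy(node.vertices))
        fill_missing_children(node.children[k], depth + 1, max_depth)
```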
And 203, merging the sub-layer nodes in the LOD node model by adopting a merging function to obtain a merging root node in the target model, wherein a first vertex contained in the merging root node comprises vertices contained in all sub-layer nodes before merging.
In an embodiment of the present application, a child level node may indicate a child level node corresponding to a parent level node in the LOD node model.
In this embodiment of the present application, the first vertex may be a vertex contained in the merged root node, and the first vertex may include a vertex contained in each sub-layer node before being merged.
In the embodiment of the present application, the merging function may be used to merge root nodes, for example, the merging function is M ().
In the embodiment of the application, the sub-layer nodes in the LOD node model can be combined by adopting a combining function so as to obtain a combined root node in the target model.
For example, suppose the (i+1)-th level of the LOD node model includes the root nodes N_(i+1)(4j-1), N_(i+1)(4j), N_(i+1)(4j+1) and N_(i+1)(4j+2); the node N_ij in the i-th level of the LOD node model is generated by applying the merging function M() to the root nodes of that level:
N_ij = M(N_(i+1)(4j-1), N_(i+1)(4j), N_(i+1)(4j+1), N_(i+1)(4j+2)); (5)
where i is the level and j is the node sequence number within level i.
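A rough sketch of the merging function M() in equation (5) is given below, reusing the hypothetical Node structure from the earlier sketch; the real merging function may also merge geometry and rendering state beyond the vertex lists.

```python
# Sketch of M(): the vertices of the four child nodes at level i+1 are
# concatenated to form the merged node N_ij at level i.
def merge_children(child_nodes):
    """M(N_(i+1)(4j-1), ..., N_(i+1)(4j+2)) -> N_ij containing all child vertices."""
    merged_vertices = []
    for child in child_nodes:
        merged_vertices.extend(child.vertices)   # first vertex set = union of child vertices
    return Node(vertices=merged_vertices, children=list(child_nodes))
```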
And 204, determining the center point of the target model according to the peripheral boxes of the target model obtained by merging the vertexes contained in the root node.
In the embodiment of the application, the outer bounding box combining the vertices contained in the root node can be obtained, and the center point of the target model can be determined according to the outer bounding box.
For example, the outer bounding box may be calculated with the bounding box calculation function box() according to the coordinates of the vertices included in the merged root node. Suppose the coordinates of the vertices before merging, with the origin of the three-dimensional model scene as the origin, are P_1, P_2, …, P_i, …, P_n, where i ∈ [1, n], i is an integer and n is the number of vertices; the outer bounding box of the vertices contained in the merged root node is then:
[x_min, y_min, z_min, x_max, y_max, z_max] = box(P_1, P_2, …, P_i, …, P_n); (6)
where x_min is the minimum value in the X-axis direction, y_min is the minimum value in the Y-axis direction, z_min is the minimum value in the Z-axis direction, x_max is the maximum value in the X-axis direction, y_max is the maximum value in the Y-axis direction, and z_max is the maximum value in the Z-axis direction.
The center point C_m(x_c, y_c, z_c) of the target model can thereby be determined from the outer bounding box:
x_c = (x_min + x_max)/2, y_c = (y_min + y_max)/2, z_c = (z_min + z_max)/2; (7)
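A short NumPy sketch of equations (6) and (7), assuming the vertex coordinates are held in an (n, 3) array:

```python
# Sketch of equations (6)-(7): axis-aligned bounding box of the merged root
# node's vertices, with the model center point taken as the box midpoint.
import numpy as np

def bounding_box_and_center(points: np.ndarray):
    """points: (n, 3) array of vertex coordinates P_1..P_n in scene coordinates."""
    mins = points.min(axis=0)      # [x_min, y_min, z_min]
    maxs = points.max(axis=0)      # [x_max, y_max, z_max]
    center = (mins + maxs) / 2.0   # C_m = ((x_min+x_max)/2, (y_min+y_max)/2, (z_min+z_max)/2)
    return np.concatenate([mins, maxs]), center
```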
Step 205, updating the vertex positions of the vertices contained in each node in the target model according to the center point of the target model.
In this embodiment of the present application, the vertex positions of vertices included in each node in the target model may be updated according to the center point of the target model.
For example, according to the scene origin of the three-dimensional model and the center point of the target model, an offset function is used to recompute the vertex position, relative to the center point of the target model, of each vertex contained in each node of the target model:
P_i' = transform(Origin, C_m, P_i); (8)
where Origin denotes the coordinate reference origin of the model scene, P_i is the coordinate of the i-th vertex with Origin as the origin, C_m is the center point of the target model, P_i' is the vertex position of the vertex relative to the center point of the target model, i ∈ [1, n], i is an integer, and n is the number of vertices.
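Equation (8) can be illustrated with the sketch below, which assumes transform() reduces to a simple translation by the center offset; the actual offset function in the patent may be more general.

```python
# Sketch of equation (8): re-express every vertex relative to the model center
# point C_m instead of the scene origin (assumed to be a pure translation).
import numpy as np

def transform_vertices(origin: np.ndarray, center: np.ndarray, points: np.ndarray) -> np.ndarray:
    """P_i' = transform(Origin, C_m, P_i): shift points so C_m becomes the local origin."""
    # points are given relative to `origin`; subtract the center offset
    return points - (center - origin)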
At step 206, a merged slice is determined based on the vertex positions of the vertices contained in the object model.
In embodiments of the present application, the merged slice may be determined based on the vertex positions of the points contained in the target model.
Step 207, performing vertex thinning on the vertices in the merged slice to obtain each target vertex after thinning in the merged slice, and performing image processing on the texture image of the rendered merged slice to obtain the target texture corresponding to each merged slice.
In step 208, the target data of the oblique photography is generated based on the target vertices of the merged slice and the target textures of the merged slice are rendered, so as to display the three-dimensional oblique photography image on the client based on the target data.
The execution of steps 207 to 208 may refer to the execution of any embodiment of the present application, and will not be described herein.
According to the data processing method for oblique photography, original data are read to reconstruct LOD node models of original slices, wherein the LOD node models comprise tree-shaped multi-layer nodes, and vertexes contained in the nodes of each layer are used for rendering images with corresponding resolutions; merging sub-layer nodes in the LOD node model by adopting a merging function to obtain a merging root node in the target model, wherein a first vertex contained in the merging root node comprises vertices contained in all sub-layer nodes before merging; determining a center point of the target model according to the outer surrounding boxes of the vertexes contained in the merging root nodes; updating the vertex positions of the vertices contained in each node in the target model according to the center point of the target model; based on the vertex positions of the vertices contained in the object model, a merged slice is determined. Therefore, based on the reconstructed LOD node model, the combination of a plurality of original slices can be realized, and the combined slices can be obtained.
In order to clearly explain how to perform vertex thinning on vertices in the combined slice in the above embodiments of the present application, to obtain each target vertex after thinning in the combined slice, the present application further provides a data processing method of oblique photography.
Fig. 3 is a flow chart of a data processing method of oblique photography according to a third embodiment of the present application.
As shown in fig. 3, the data processing method of oblique photography may include the steps of:
in step 301, raw data of oblique photography is acquired.
And 302, performing root node merging on a plurality of original slices to obtain merged slices.
The execution of steps 301 to 302 may refer to the execution of any embodiment of the present application, and will not be described herein.
Step 303, vertex information is obtained for vertices in the combined slice, wherein the vertex information includes vertex position and vertex index data.
In embodiments of the present application, vertex information may include vertex position and vertex index data.
In the embodiment of the application, vertex information may be obtained for vertices in the merged slice.
And 304, thinning the vertexes in the combined slice based on vertex information by adopting a vertex clustering algorithm to obtain each target vertex after thinning in the combined slice.
In the embodiment of the application, a vertex clustering algorithm may be adopted to dilute vertices in the combined slice based on vertex information, so as to obtain each target vertex after being diluted in the combined slice.
For example, before thinning the vertices in a merged slice, a plurality of irregular triangular patches are generated from the vertices in the merged slice based on the vertex information, and the merged slice may be composed of these irregular triangular patches:
M = F(Δ_0, Δ_1, …, Δ_i, …, Δ_m); (9)
where F is a model building function, Δ_i is the i-th triangular patch, i ∈ [0, m], i is an integer, and the number of triangular patches is m+1.
The vertex in the combined slice can be thinned by adopting an Open3D vertex clustering algorithm based on a voxel grid, average or curved surface distance vertex convergence mode:
M′=Simplify(M); (10)
thus, each target vertex after thinning in the combined slice can be obtained.
In one possible implementation of the embodiments of the present application, the vertex information may further include texture coordinates for indicating a pixel point in the rendered image to render; the texture coordinates may be updated based on the vertex positions of the respective target vertices after thinning.
As an example, a UV reconstruction method may be used to generate texture coordinates consistent with the vertex positions of the target vertices according to the vertex positions of the thinned target vertices, so as to update the texture coordinates.
In step 305, the texture image of the rendered merged slice is subjected to image processing to obtain the target texture corresponding to each merged slice.
Step 306, generating oblique photography target data based on the target vertices of the merged slice and rendering the target texture of the merged slice, so as to display the three-dimensional oblique photography image on the client based on the target data.
The execution of steps 305 to 306 may refer to the execution of any embodiment of the present application, and will not be described herein.
In one possible implementation manner of the embodiment of the present application, in a case where rendering data is included in target data, generating a single primitive data block based on vertex information of at least two target vertices, wherein the vertex information includes one or more combinations of vertex coordinates, vertex indexes, texture coordinates, and normal vector structures; combining texture images according to at least two target textures required by rendering at least two target vertexes to obtain a single picture data block; updating texture coordinates in the single primitive data block based on the image size corresponding to the single picture data block and the image sizes of at least two target textures before texture image merging; and generating rendering data for indicating the GPU of the client to render according to the single primitive data block and the single picture data block.
In embodiments of the present application, vertex information may include one or more combinations of vertex coordinates, vertex indices, texture coordinates, and normal vector structures.
In an embodiment of the present application, a single primitive data block may be generated based on vertex information of at least two target vertices.
For example, the vertex information of at least two target vertices may be combined correspondingly to generate a single primitive data block. Suppose there are n target vertices; the vertex information B_i of the i-th target vertex is:
B_i: V_i + I_i + UV_i + N_i; (11)
where V_i is the vertex coordinate of the i-th target vertex, I_i is the vertex index of the i-th target vertex, UV_i is the texture coordinate of the i-th target vertex, N_i is the normal vector of the i-th target vertex, i ∈ [0, n], i is an integer, and n is the number of target vertices.
In expression (11), "+" is merely used to indicate that the contents on both sides are combined; that is, expression (11) indicates that the vertex information B_i of the i-th target vertex is composed of V_i, I_i, UV_i and N_i.
A single primitive data block may then be generated based on the vertex information of the n target vertices as:
[B_1 + B_2 + … + B_i + … + B_n];
where "+" again merely indicates that the contents on both sides are combined.
In the embodiment of the application, texture image merging can be performed according to at least two target textures required by at least two target vertex rendering, so as to obtain a single picture data block.
For example, suppose that T target textures Q_1, Q_2, …, Q_i, …, Q_T are required for rendering the target vertices, where i ∈ [1, T], i is an integer and T is the number of target textures; the T target textures are merged into a single picture data block:
[Q_1 + Q_2 + … + Q_i + … + Q_T];
In the embodiment of the application, the texture coordinates in the single primitive data block may be updated based on the image size corresponding to the single picture data block and the image sizes of the at least two target textures before the texture images are merged.
For example, assume that the texture coordinates in the single primitive data block before texture image merging are:
(UV_10, UV_11, …, UV_1i, …, UV_20, UV_21, …, UV_2i, …, …, UV_T0, UV_T1, …, UV_Ti, …)
where the number of target textures is T; after the texture images are merged, the texture coordinate corresponding to the i-th target texture becomes:
UV_ij' = combine(UV_ij, m, n, m_i, n_i); (12)
where UV_ij denotes the j-th texture coordinate of the i-th target texture, m×n denotes the image size of the single picture data block obtained by merging the target texture images, m_i×n_i denotes the image size of the i-th target texture image, and UV_ij' denotes the new texture coordinate of UV_ij after the texture images are merged.
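A minimal sketch of the combine() idea in equation (12) follows, under the assumption that the target textures are stacked vertically into one m×n atlas; any other packing layout would apply the same rescale-and-shift to each UV coordinate.

```python
# Sketch of equation (12): remap a UV coordinate of one source texture into the
# merged atlas, given the atlas size, the source texture size and its offset.
def combine_uv(uv, atlas_w, atlas_h, tex_w, tex_h, y_offset):
    """Map a (u, v) coordinate of one source texture into the merged atlas."""
    u, v = uv
    return (u * tex_w / atlas_w,                  # shrink horizontally
            (v * tex_h + y_offset) / atlas_h)     # shrink and shift vertically
```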
In one possible implementation of the embodiments of the present application, rendering data for instructing a GPU (Graphics Processing Unit, graphics processor) of a client to render may be generated from a single primitive data block and a single picture data block.
Thus, the rendering data may be compressed to reduce the computational resource overhead of the GPU.
In another possible implementation manner of the embodiment of the present application, the vertex information of the floating point data type in the single primitive data block may be further quantized to be converted into the vertex information of the integer data type.
For example, the numerical data of the vertex coordinates, texture coordinates and normal vectors of the single primitive data block are read, and the floating point data can be encoded into integer data by means of Draco numerical encoding based on the quantization bit number and the compression level. For example, the k floating point values v_1, v_2, …, v_i, …, v_k before encoding are recorded as:
V = (v_1, v_2, …, v_i, …, v_k); (13)
The numerical values of the vertex coordinates of the single primitive data block are encoded with an encoding function E() so that the floating point data are encoded into integers:
V' = E(V, level, quantization); (14)
where level is the compression level, quantization is the number of quantization bits, and V' is the vertex coordinates of the encoded single primitive data block.
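The sketch below illustrates the quantization idea behind equations (13)-(14); it mimics what a Draco-style encoder does internally and is not the Draco API itself. The quantization bit number 11 is an arbitrary example.

```python
# Sketch: quantize floating point vertex values to integers on a grid with
# 2^bits - 1 steps, keeping (v_min, scale) so the values can be dequantized.
import numpy as np

def quantize(values: np.ndarray, quantization_bits: int = 11):
    """Encode floats V into integer values V' (plus the parameters to invert it)."""
    v_min, v_max = values.min(), values.max()
    scale = (2 ** quantization_bits - 1) / max(v_max - v_min, 1e-12)
    encoded = np.round((values - v_min) * scale).astype(np.uint32)
    return encoded, (v_min, scale)
```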
Therefore, the vertex information of the floating point data type in the single primitive data block is quantized to be converted into the vertex information of the integer data type, so that the computing capacity and the processing capacity of the CPU can be improved, and the running efficiency is improved.
According to the data processing method for oblique photography, vertex information is obtained through combining vertexes in a slice, wherein the vertex information comprises vertex positions and vertex index data; and adopting a vertex clustering algorithm to dilute the vertices in the combined slice based on vertex information so as to obtain each target vertex after being diluted in the combined slice. Therefore, vertex thinning can be carried out on the vertices in the combined slice, and each target vertex after thinning in the combined slice can be obtained.
The data processing method of oblique photography of the present application will be described in detail with reference to examples.
As an example, fig. 4 is a flow chart of a data processing method of oblique photography provided in the present application, where the data processing method of oblique photography may include five modules:
1) Root node merging module
1.1 A LOD node model is reconstructed.
Metadata information of original oblique photography data (denoted as original data in the present application) is read, the oblique photography data is restored, and an LOD node model of a plurality of original slices in the original oblique photography data is reconstructed by adopting a quadtree structure.
1.2 A) the supplemental model nodes.
Starting from the top level of the LOD node model, a recursive check determines whether any data node in the hierarchy to be merged is missing a child node of the quadtree structure; if so, data is copied from the parent node to supplement the missing child node.
1.3 A) merging root node.
And carrying out root node merging on the sub-layer nodes in the LOD node model by adopting a merging function to obtain a merged root node in the target model, wherein a first vertex contained in the merged root node comprises vertices contained in all sub-layer nodes before merging.
1.4 The vertex position of the vertex is recalculated.
Calculating a center point of the target model according to an outer surrounding box of the vertex of the target model, subtracting the center point of the three-dimensional scene from the center point of the target model, calculating an offset matrix of the target model, and recalculating the vertex positions of the vertices contained in each node of the target model based on the offset matrix.
a) And calculating the peripheral box of the target model by adopting a bounding box calculation function according to the vertex coordinates of the target model.
b) And calculating the center point of the target model according to the outer bounding box.
c) According to the origin of the model scene and the center point of the target model, an offset function is used to recalculate the coordinate position values of the vertex coordinates in the LOD node model relative to the center point of the bounding box, i.e. to update the vertex positions contained in each node of the LOD node model.
2) Vertex thinning module
2.1 The vertex data is read.
And reading vertex information from vertices in the combined slice, wherein the vertex information comprises vertex positions, vertex index data and texture coordinates indicating pixel points in the rendered image for rendering.
2.2 The vertex is thinned.
Based on a voxel grid, average or curved surface distance vertex convergence mode, an Open3D vertex clustering algorithm is adopted, and simplification and thinning processing is carried out based on vertex information, so that a geometric grid data model is generated.
And obtaining each target vertex after thinning in the combined slice through a vertex thinning function Simplify.
2.3 Reconstructing UV coordinate data, and generating UV coordinate data consistent with each target vertex by adopting a UV reconstruction method according to each diluted target vertex so as to update texture coordinates.
3) Texture compression module
3.1 A) reducing the texture resolution.
The texture data are read to obtain a texture image, and a method based on pixel-area reduction is used to reduce the texture image to a suitable resolution: the texture image at its original pixel size is reduced to r times (r < 1) its size before reduction according to the set scaling factor, and the pixel value of each pixel in the texture image is recalculated with a cubic spline interpolation function.
3.2 Image compression).
And adopting a WebP compression algorithm to compress the texture image according to the compression mode and the picture quality factor.
4) Rendering data merging module
4.1 A single primitive data block is created.
And initializing vertex coordinates, vertex indexes, texture coordinates and normal vector structures of the primitive data blocks.
4.2 Vertex data block merging.
And traversing the vertex information of the target vertex, and combining the vertex coordinates, the vertex indexes, the texture UV coordinates and the normal vectors in the vertex information into a newly created single primitive data block.
4.3 Texture image merging.
Traversing the texture picture, merging the texture picture into a single picture data block.
4.4 UV coordinate data is reconstructed, UV coordinate data consistent with each target vertex is generated by a UV coordinate reconstruction method for a single picture data block, and data is added to the newly created single primitive data block.
5) Numerical value coding module
5.1 Numerical data of vertex information, that is, numerical data of vertex information in a single primitive data block is read, wherein the vertex information includes vertex coordinates, texture coordinates, normal vectors.
5.2 ) Numerical coding: according to the quantization bit number and the compression level, the Draco numerical coding mode is used to encode vertex information of floating point data types into vertex information of integer data types.
In summary, the data processing method of oblique photography of the present application may embody at least one of the following advantages:
1. by adopting the quadtree node merging method for the original oblique photographing data, the number of slice files can be reduced, so that the number of slice requests during data loading is reduced.
2. By recalculating the vertex positions of the vertices, the jagged-edge visual problem that appears on the model during Web front-end rendering when vertex coordinate values lose significant precision can be solved.
3. The performance optimization of oblique photographic data can be realized through the optimization flows of root node combination, vertex thinning, texture compression, rendering data combination and numerical coding.
The inventors found that, after processing with node merging, vertex thinning, model compression, texture compression and other techniques, the processed data shrinks to about 1/3 to 1/6 of the original data size, which alleviates the problems of slow three-dimensional data loading, stuttering while browsing, and frequent crashes when the data volume is large. In tests in a local area network environment, the time to request, load and render a single screen of data is reduced from more than 10 seconds to less than 3 seconds, greatly improving the loading and browsing interaction experience of three-dimensional oblique photography data.
In summary, according to the data processing method for oblique photography, the step-by-step optimization processing from coarse granularity to fine granularity is realized on model data, the video memory occupation of the oblique photography data is effectively reduced by reducing the number of slices and the data volume of the oblique photography data, and the loading performance and the loading efficiency are improved.
Corresponding to the data processing method of oblique photography provided in the above embodiments, an embodiment of the present application further provides a data processing device of oblique photography. Since the data processing apparatus for oblique photography provided in the embodiment of the present application corresponds to the data processing method for oblique photography provided in the above-described several embodiments, the implementation of the data processing method for oblique photography in the embodiment is also applicable to the data processing apparatus for oblique photography provided in the embodiment, and will not be described in detail in the embodiment.
Fig. 5 is a schematic structural diagram of a data processing apparatus for oblique photography according to a seventh embodiment of the present application.
As shown in Fig. 5, the data processing apparatus 500 for oblique photography may include: an acquisition module 501, a first merging module 502, a thinning module 503, a first processing module 504 and a first generation module 505.
The acquiring module 501 is configured to acquire raw data of oblique photography, where the raw data includes a plurality of raw slices and texture images for rendering each raw slice, and each raw slice includes a plurality of vertices for splitting the raw slice.
The first merging module 502 is configured to perform root node merging on the plurality of original slices to obtain a merged slice.
And the thinning module 503 is configured to perform vertex thinning on vertices in the merged slice, so as to obtain each target vertex after thinning in the merged slice.
A first processing module 504, configured to perform image processing on the texture image of the rendered merged slice to obtain a target texture corresponding to each merged slice.
The first generation module 505 is configured to generate target data of the oblique photography based on the target vertices of the merged slice and the target textures of the merged slice, so as to display a three-dimensional oblique photography image on the client based on the target data.
In one possible implementation manner of the embodiment of the present application, the first merging module 502 is configured to: reading original data to reconstruct an LOD node model of each original slice, wherein the LOD node model comprises tree-shaped multi-layer nodes, and vertexes contained in the nodes of each layer are used for rendering images with corresponding resolutions; combining sub-layer nodes in the LOD node model by adopting a combining function to obtain a combined root node in the target model, wherein a first vertex contained in the combined root node comprises vertices contained in all sub-layer nodes before combination; determining a center point of the target model according to an outer surrounding box of the target model obtained by combining vertexes contained in the root node; updating the vertex positions of the vertices contained in each node in the target model according to the center point of the target model; based on the vertex positions of the vertices contained in the object model, a merged slice is determined.
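As an illustration of the recentering carried out by the first merging module, the following Python sketch computes the center of the outer bounding box of the merged vertices and re-expresses every vertex relative to it, which keeps coordinate magnitudes small and limits floating-point precision loss in the Web renderer; the node structure and names are assumptions for the example, not details of the patented implementation.

```python
# Hedged sketch: compute the axis-aligned outer bounding box of the merged
# model, take its center as the model center, and update all vertex positions
# relative to that center.

def recenter_vertices(nodes):
    """nodes: list of dicts, each with a 'vertices' key holding (x, y, z) tuples."""
    xs, ys, zs = [], [], []
    for node in nodes:
        for x, y, z in node["vertices"]:
            xs.append(x); ys.append(y); zs.append(z)

    # Center point of the outer bounding box of the target model.
    cx = (min(xs) + max(xs)) / 2.0
    cy = (min(ys) + max(ys)) / 2.0
    cz = (min(zs) + max(zs)) / 2.0

    for node in nodes:
        node["vertices"] = [(x - cx, y - cy, z - cz) for x, y, z in node["vertices"]]
    return (cx, cy, cz), nodes
```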
In one possible implementation manner of the embodiment of the present application, the LOD node model adopts a quadtree structure, and the data processing apparatus 500 for oblique photography may further include:
and the checking module is used for recursively checking whether the nodes in the hierarchy to be combined lack child nodes of the quadtree structure from the top layer for the LOD node model.
And the copy module is used for copying the father node as the missing child node under the condition that the missing exists.
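A minimal sketch of the check-and-copy behaviour of these two modules, assuming a simple dictionary-based quadtree node layout chosen purely for illustration, might look as follows.

```python
# Illustrative sketch: walk the quadtree from the top layer and, wherever a
# node in the hierarchy to be merged is missing one of its four children,
# substitute a copy of the parent so the merge always sees a full set of four.
import copy

def fill_missing_children(node, depth, merge_depth):
    if node is None or depth >= merge_depth:
        return
    children = node.setdefault("children", [None] * 4)
    for i in range(4):
        if children[i] is None:
            # Copy the parent (geometry included) to stand in for the gap.
            stand_in = copy.deepcopy({k: v for k, v in node.items() if k != "children"})
            stand_in["children"] = [None] * 4
            children[i] = stand_in
        fill_missing_children(children[i], depth + 1, merge_depth)
```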
In one possible implementation manner of the embodiment of the present application, the thinning module 503 is configured to: obtain vertex information of the vertices in the merged slice, wherein the vertex information comprises vertex positions and vertex index data; and thin the vertices in the merged slice based on the vertex information by using a vertex clustering algorithm, so as to obtain each thinned target vertex in the merged slice.
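For reference, a minimal vertex-clustering sketch is shown below; it collapses vertices that fall into the same uniform grid cell and rewrites the index buffer, which is one common form of a vertex clustering algorithm. The application does not specify the exact variant used, so the grid-based scheme and names here are assumptions.

```python
# Minimal vertex-clustering decimation sketch: vertices in the same grid cell
# are merged into one representative (their average), and the triangle index
# list is remapped. A production thinning step would also drop degenerate
# triangles and rebuild normals.

def cluster_vertices(vertices, indices, cell_size):
    """vertices: list of (x, y, z); indices: flat triangle index list."""
    cell_to_new = {}          # grid cell -> new vertex index
    accum = []                # running sums [sx, sy, sz, count] per cell
    old_to_new = []

    for x, y, z in vertices:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        if cell not in cell_to_new:
            cell_to_new[cell] = len(accum)
            accum.append([0.0, 0.0, 0.0, 0])
        j = cell_to_new[cell]
        accum[j][0] += x; accum[j][1] += y; accum[j][2] += z; accum[j][3] += 1
        old_to_new.append(j)

    new_vertices = [(sx / n, sy / n, sz / n) for sx, sy, sz, n in accum]
    new_indices = [old_to_new[i] for i in indices]
    return new_vertices, new_indices
```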
In one possible implementation manner of the embodiment of the present application, the vertex information further includes texture coordinates for indicating the pixel points used for rendering in the rendered image; the data processing apparatus 500 for oblique photography further includes:
and the first updating module is used for updating texture coordinates according to the vertex positions of the thinned target vertices.
In one possible implementation manner of the embodiment of the present application, the first processing module 504 is configured to: and reducing the resolution of the texture image of the rendered merged slice, and/or performing image compression on the texture image of the rendered merged slice to obtain the target texture corresponding to each merged slice.
In one possible implementation manner of the embodiment of the present application, the first processing module 504 is configured to: reducing the size of the texture image of the rendered combined slice according to a set scaling factor, wherein the value of the scaling factor is smaller than 1; and determining the pixel value of each pixel in the target texture by adopting a cubic spline interpolation function based on the reduced image size.
In one possible implementation manner of the embodiment of the present application, the first processing module 504 is configured to: and compressing the texture image by adopting a set compression algorithm according to the set compression mode and the set picture quality factor.
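The two options handled by the first processing module can be illustrated with the Pillow imaging library, using bicubic resampling as a stand-in for the cubic-interpolation step described above and the JPEG quality parameter as the picture quality factor; the choice of library, file format and parameter values are assumptions for the example rather than details of this application.

```python
# Hedged example: shrink a texture by a scale factor below 1, then re-encode
# it with a chosen picture quality factor. Requires Pillow >= 9.1.
from PIL import Image

def process_texture(path_in, path_out, scale=0.5, quality=75):
    img = Image.open(path_in).convert("RGB")
    if scale < 1.0:
        new_size = (max(1, int(img.width * scale)), max(1, int(img.height * scale)))
        # Bicubic resampling stands in for the cubic-interpolation step.
        img = img.resize(new_size, Image.Resampling.BICUBIC)
    # The JPEG quality factor controls the compression / fidelity trade-off.
    img.save(path_out, format="JPEG", quality=quality)
```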
In one possible implementation manner of the embodiment of the present application, the target data includes rendering data; the data processing apparatus 500 for oblique photography may further include:
and a second generation module for generating a single primitive data block based on vertex information of at least two target vertices, wherein the vertex information comprises one or more combinations of vertex coordinates, vertex indices, texture coordinates, and normal vector structures.
And the second merging module is used for merging texture images according to at least two target textures required by rendering at least two target vertexes so as to obtain a single picture data block.
And the second updating module is used for updating the texture coordinates in the single primitive data block based on the image size corresponding to the single picture data block and the image sizes of at least two target textures before the texture images are combined.
And the third generation module is used for generating rendering data for indicating the GPU of the client to render according to the single primitive data block and the single picture data block.
In one possible implementation manner of the embodiment of the present application, the data processing apparatus 500 for oblique photography may further include:
and the second processing module is used for carrying out quantization processing on the vertex information of the floating point data type in the single primitive data block so as to convert the vertex information into the vertex information of the integer data type.
According to the data processing apparatus for oblique photography, original data of oblique photography are obtained, wherein the original data include a plurality of original slices and texture images for rendering the original slices, and each original slice includes a plurality of vertices for splitting the original slice; root node merging is carried out on the plurality of original slices to obtain merged slices; vertex thinning is carried out on the vertices in the merged slice to obtain each thinned target vertex in the merged slice, and image processing is carried out on the texture image rendering the merged slice to obtain the target texture corresponding to each merged slice; and target data of the oblique photography are generated based on the target vertices of the merged slice and the target textures rendering the merged slice, so as to display the three-dimensional oblique photography image on the client based on the target data. In this way, by reducing the number of slices and the data volume of the oblique photography data, the video memory occupation of the oblique photography data is effectively reduced, and the loading performance and loading efficiency are improved.
In order to implement the foregoing embodiments, the present application further provides an electronic device, and Fig. 6 is a schematic structural diagram of the electronic device provided in the eighth embodiment of the present application. The electronic device includes:
a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602.
The processor 602 implements the data processing method of oblique photography provided in the above-described embodiment when executing the program.
Further, the electronic device further includes:
a communication interface 603 for communication between the memory 601 and the processor 602.
A memory 601 for storing a computer program executable on the processor 602.
The memory 601 may include a high-speed RAM, and may further include a non-volatile memory, such as at least one magnetic disk memory.
And a processor 602 for implementing the data processing method of oblique photography described in the above embodiment when executing the program.
If the memory 601, the processor 602, and the communication interface 603 are implemented independently, the communication interface 603, the memory 601, and the processor 602 may be connected to each other through a bus and communicate with each other. The bus may be an industry standard architecture (Industry Standard Architecture, abbreviated ISA) bus, a peripheral component interconnect (Peripheral Component Interconnect, abbreviated PCI) bus, or an extended industry standard architecture (Extended Industry Standard Architecture, abbreviated EISA) bus, among others. The bus may be classified into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in Fig. 6, but this does not mean that there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 601, the processor 602, and the communication interface 603 are integrated on a chip, the memory 601, the processor 602, and the communication interface 603 may perform communication with each other through internal interfaces.
The processor 602 may be a central processing unit (Central Processing Unit, abbreviated as CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
In order to achieve the above embodiments, the present application also proposes a non-transitory computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the data processing method of oblique photography provided in the above embodiments.
In order to achieve the above embodiments, the present application further proposes a computer program product, which, when instructions in the computer program product are executed by a processor, implements the data processing method of oblique photography provided in the above embodiments.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (10)

1. A data processing method of oblique photography, characterized by comprising the steps of:
acquiring raw data of oblique photography, wherein the raw data comprises a plurality of raw slices and texture images for rendering the raw slices, and each raw slice comprises a plurality of vertexes for splitting the raw slice;
merging the plurality of original slices to obtain a merged slice;
performing vertex thinning on the vertexes in the merged slice to obtain each target vertex after thinning in the merged slice, and performing image processing on a texture image rendering the merged slice to obtain a target texture corresponding to each merged slice;
and generating target data of the oblique photography based on the target vertexes of the combined slices and rendering target textures of the combined slices so as to display three-dimensional oblique photography images on a client based on the target data.
2. The method of claim 1, wherein the merging the plurality of original slices into a merged slice comprises:
reading the original data to reconstruct an LOD node model of each original slice, wherein the LOD node model comprises tree-shaped multi-layer nodes, and vertexes contained in the nodes of each layer are used for rendering images with corresponding resolutions;
combining sub-layer nodes in the LOD node model by adopting a combining function to obtain a combined root node in the target model, wherein a first vertex contained in the combined root node comprises vertices contained in all sub-layer nodes before combination;
determining a center point of the target model according to an outer surrounding box of the target model obtained by combining vertexes contained in the root node;
updating the vertex positions of the vertices contained in each node in the target model according to the center point of the target model;
the merged slice is determined based on vertex positions of vertices contained in the target model.
3. The method of claim 2, wherein the LOD node model employs a quadtree structure, the method further comprising:
recursively checking from the top layer whether the nodes in the hierarchy to be combined lack child nodes of the quadtree structure for the LOD node model;
in the event that a child node is missing, copying the parent node as the missing child node.
4. The method of claim 1, wherein performing vertex thinning on vertices in the merged slice to obtain thinned target vertices in the merged slice comprises:
obtaining vertex information for the vertices in the merged slice, wherein the vertex information comprises vertex position and vertex index data;
and thinning the vertexes in the combined slice based on the vertex information by adopting a vertex clustering algorithm to obtain each target vertex after thinning in the combined slice.
5. The method of claim 4, wherein the vertex information further comprises texture coordinates for indicating pixels in the rendered image to render; the method further comprises the steps of:
and updating the texture coordinates according to the vertex positions of the thinned target vertices.
6. The method of claim 1, wherein the performing image processing on the texture image rendering the merged slice to obtain the target texture corresponding to each merged slice comprises:
and reducing the resolution of the texture image rendering the combined slice, and/or performing image compression on the texture image rendering the combined slice to obtain the target texture corresponding to each combined slice.
7. The method of claim 6, wherein the reducing the resolution of the texture image rendering the merged slice comprises:
reducing the size of the texture image rendering the combined slice according to a set scaling factor, wherein the value of the scaling factor is smaller than 1;
and determining the pixel value of each pixel in the target texture by adopting a cubic spline interpolation function based on the reduced image size.
8. The method of claim 6, wherein image compressing the texture image rendering the merged slice comprises:
and according to the set compression mode and the set picture quality factor, adopting a set compression algorithm to compress the texture image.
9. The method according to any one of claims 1-8, wherein the target data comprises rendering data; the method further comprises the steps of:
generating a single primitive data block based on vertex information for at least two target vertices, wherein the vertex information comprises one or more combinations of vertex coordinates, vertex indices, texture coordinates, and normal vector structures;
combining texture images according to at least two target textures required by rendering of the at least two target vertexes so as to obtain a single picture data block;
updating texture coordinates in the single primitive data block based on the image size corresponding to the single picture data block and the image sizes of at least two target textures before texture image merging;
and generating rendering data for indicating the GPU of the client to render according to the single primitive data block and the single picture data block.
10. The method according to claim 9, wherein the method further comprises:
and carrying out quantization processing on the vertex information of the floating point data type in the single primitive data block so as to convert the vertex information into the vertex information of the integer data type.
CN202211728257.XA 2022-12-29 2022-12-29 Data processing method for oblique photography Pending CN116129019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211728257.XA CN116129019A (en) 2022-12-29 2022-12-29 Data processing method for oblique photography

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211728257.XA CN116129019A (en) 2022-12-29 2022-12-29 Data processing method for oblique photography

Publications (1)

Publication Number Publication Date
CN116129019A true CN116129019A (en) 2023-05-16

Family

ID=86303988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211728257.XA Pending CN116129019A (en) 2022-12-29 2022-12-29 Data processing method for oblique photography

Country Status (1)

Country Link
CN (1) CN116129019A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710597A (en) * 2023-12-19 2024-03-15 中铁一局集团市政环保工程有限公司 Three-dimensional modeling method and system based on oblique photographic data and electronic equipment

Similar Documents

Publication Publication Date Title
Kopf et al. One shot 3d photography
CN107730503B (en) Image object component level semantic segmentation method and device embedded with three-dimensional features
JP4083238B2 (en) Progressive mesh adaptive subdivision method and apparatus
JP4237806B2 (en) Progressive mesh adaptive subdivision method and apparatus
Fabio From point cloud to surface: the modeling and visualization problem
Remondino From point cloud to surface: the modeling and visualization problem
Gobbetti et al. C‐BDAM–compressed batched dynamic adaptive meshes for terrain rendering
US8725466B2 (en) System and method for hybrid solid and surface modeling for computer-aided design environments
US20080238919A1 (en) System and method for rendering of texel imagery
US11600044B2 (en) Rendering textures utilizing sharpness maps
US11842443B2 (en) Rendering three-dimensional objects utilizing sharp tessellation
CN116129019A (en) Data processing method for oblique photography
JP6689269B2 (en) Method for compressing and decompressing data representing a digital three-dimensional object, and information recording medium for recording information including the data
JP7383171B2 (en) Method and apparatus for point cloud coding
US11948338B1 (en) 3D volumetric content encoding using 2D videos and simplified 3D meshes
JP2005332028A (en) Method and apparatus for generating three-dimensional graphic data, generating texture image, and coding and decoding multi-dimensional data, and program therefor
Mahdavi-Amiri et al. Data management possibilities for aperture 3 hexagonal discrete global grid systems
Bertolotto Progressive techniques for efficient vector map data transmission: An overview
KR100400608B1 (en) Encoding method for 3-dimensional voxel model by using skeletons
Jiang et al. A large-scale scene display system based on webgl
US11875435B2 (en) Generating scalable fonts utilizing multi-implicit neural font representations
Wu et al. Unitypic: Unity point-cloud interactive core
US20240104783A1 (en) Multiple attribute maps merging
US20230306643A1 (en) Mesh patch simplification
WO2024021089A1 (en) Encoding method, decoding method, code stream, encoder, decoder and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination