CN110910496B - VR natural environment automatic construction method based on big data and AI - Google Patents

VR natural environment automatic construction method based on big data and AI

Info

Publication number
CN110910496B
CN110910496B (application CN201911070540.6A)
Authority
CN
China
Prior art keywords
image
model
point
environment
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911070540.6A
Other languages
Chinese (zh)
Other versions
CN110910496A (en)
Inventor
夏磊 (Xia Lei)
尤海宁 (You Haining)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Chengfang Intelligent Technology Co ltd
Original Assignee
Anhui Chengfang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Chengfang Intelligent Technology Co ltd filed Critical Anhui Chengfang Intelligent Technology Co ltd
Priority to CN201911070540.6A priority Critical patent/CN110910496B/en
Publication of CN110910496A publication Critical patent/CN110910496A/en
Application granted granted Critical
Publication of CN110910496B publication Critical patent/CN110910496B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a VR natural environment automatic construction method based on big data and AI, in the technical field of data processing. The method comprises three stages: constructing a terrain model, generating a three-dimensional terrain model, and generating an environment model based on deep learning. A model of the natural environment is generated rapidly from terrain data acquired by satellite remote sensing combined with high-definition satellite imagery acquired in real time, and the environment is displayed in real time in VR (virtual reality). This removes the workload of manual modeling: the environment model is generated intelligently through big data and AI, the real environment is restored more accurately and in finer detail, and the cost in labor and time is greatly reduced.

Description

VR natural environment automatic construction method based on big data and AI
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a VR natural environment automatic construction method based on big data and AI.
Background
Terrain model construction technology can restore the terrain, mountains and elevations recorded in a map, and has important applications in geological research and geographic survey. It is also a key development area for today's map navigation software, and it has important applications in military digital operations; it can fairly be called a technology of high practical value. Terrain generation techniques have already been applied in this field, for example in Google Earth software, and simple natural environment construction techniques have been applied to a few local areas of the United States, where detailed models have been built for certain natural features and landmark buildings. With nothing but a computer, a user can travel in the software to famous sites of the world, such as the ancient Egyptian pyramids, the Louvre in France, Notre-Dame de Paris and the Forbidden City in Beijing, and feel present in the scene. Of course, such environments are limited to small local areas, because building and field-checking the models takes a great deal of time, so they have little practical value beyond entertainment.
Natural environment construction is a newer technology that builds a further step on terrain construction technology. With the development of China's aerospace industry, Earth observation yields more and more ground-object information, and the resolution of satellite and aerial imagery keeps increasing. High-resolution remote sensing images can now resolve ground features on the order of 1 m, which makes richer environment construction on top of terrain construction attainable.
The VR natural environment automatic construction method based on big data and AI studied here solves the difficulty of natural environment construction on the basis of terrain construction. Using big data and AI technology, it realizes large-scale, intelligent environment construction: map models, forest vegetation, mountains, water areas, streets and buildings can be generated quickly and accurately without field survey or manual modeling.
Disclosure of Invention
The invention aims to provide a VR natural environment automatic construction method based on big data and AI, in which a model of the natural environment is generated rapidly from terrain data acquired by satellite remote sensing combined with high-definition satellite imagery acquired in real time, and the environment is displayed in real time in a VR environment. This removes the workload of manual modeling: the environment model is generated intelligently through big data and AI, the real environment is restored more accurately and in finer detail, and the cost in labor and time is greatly reduced.
In order to solve the technical problems, the invention is realized by the following technical scheme:
the invention relates to a VR natural environment automatic construction method based on big data and AI, comprising the following steps: constructing a terrain model, generating a three-dimensional terrain model and generating an environment model based on deep learning;
the terrain model building method comprises the following processes:
a00: contour map processing: namely, the observation image is represented by an image matrix F (x, y);
a01: contour image acquisition and preprocessing: removing the noise of the observation image and irrelevant information to obtain a contour image F (x, y);
a02: binarization processing of the contour image F (x, y): determining a binarization threshold value, and converting the contour image F (x, y) into an image only with foreground color and background color;
a03: skeletonizing contour lines of the contour line image F (x, y) by adopting a layer-by-layer peeling thinning algorithm;
a04: marking skeleton points by depth values: traversing the points in the contour image F (x, y), and judging whether the current point is a skeleton point by comparing the depth values of the current point and its neighboring points;
a05: stripping from the outside to the inside according to the contour depth value mark;
a06: contour line breakpoint connection: connecting the contour lines at break points in a mode combining machine connection and manual connection;
a07: tracing the contour lines and marking elevation values;
the generation of the three-dimensional terrain model comprises the following processes:
a08: generating an elevation terrain data file;
a09: generating a three-dimensional terrain model;
the generating of the environment model based on deep learning specifically includes the following steps:
a10: establishing a deep learning natural environment model database;
a11: establishing a convolutional neural network;
a12: establishing a natural environment recognition training model;
a13: acquiring a high-resolution remote sensing environment image, and performing preprocessing such as noise filtering and smoothing on the image to obtain a preprocessed top view;
a14: automatically recognizing the preprocessed top view using the environment models in the model library, the automatic recognition being performed by the trained convolutional neural network;
a15: and generating an environment model according to the identification result.
Preferably, the layer-by-layer peeling thinning algorithm specifically comprises the following process: first, find a pixel on the edge of the line image; taking that pixel as the center, detect the gray values of its 8-neighborhood in a fixed order to decide whether the center pixel is set to 0, and find the edge pixel adjacent to the center pixel to continue tracking and peeling.
Preferably, the limiting conditions of the layer-by-layer peeling refinement algorithm are as follows: the end points of the line segment are not eliminated, the originally connected points are not interrupted, and the area is not excessively etched.
Preferably, the machine connection comprises the following: traversing the contour image to find a breakpoint, and seeking another breakpoint with a window operator centered on the first; once found, clearing the breakpoint marks and connecting the two breakpoints;
wherein, the constraints on the other breakpoint are as follows: the remaining neighborhood points of the two breakpoints are not in the same direction; and the two breakpoints are located on the same side of any contour line.
Preferably, a07 specifically includes the following processes:
b00: determining a starting point of a search line segment and using the starting point as a current point;
b01: taking the current point as the center, searching for the next untracked point in the 8 directions northwest, north, northeast, east, southeast, south, southwest and west; if there is no such point, exiting; if there is, recording its coordinates and search direction, and determining the search direction of the next point;
b02: b01 is executed by taking the newly found point as a new discrimination center according to the searching direction determined by B01;
b03: tracing terminates when all points on the line segment have been traced or the other end point is reached.
Preferably, a09 specifically includes the following:
c00: selecting Import in the 3DMAX software, and then selecting the DED file generated in the previous step in the terrain file selection dialog box;
c01: setting the control parameters of the model file appropriately in the Import Terrain dialog box;
c02: previewing the terrain to be generated, and clicking OK to generate the three-dimensional terrain model.
The invention has the following beneficial effects:
1. according to the method, a model of the natural environment is generated rapidly from terrain data acquired by satellite remote sensing combined with high-definition satellite imagery acquired in real time, and the environment is displayed in real time in a VR (virtual reality) environment; the workload of manual modeling is removed, the real environment is restored more accurately and in finer detail by generating the environment model intelligently through big data and AI, and the cost in labor and time is greatly reduced;
2. the invention performs large-scale intelligent model construction, advancing terrain construction to environment construction; ground information is acquired by remote sensing and the environment model is then generated rapidly, which makes geographic research, geological survey and even seismic survey more convenient and provides timely information;
3. a real-time, large-scale three-dimensional visual VR map environment is provided for the military, which markedly improves the effect of military VR virtual simulation training; changes in global environmental resources such as forests, wetlands, freshwater lakes and volcanoes can be monitored in real time, and with VR equipment any satellite-covered place can be observed as if on the spot at any time.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a VR natural environment automatic composition method based on big data and AI according to the invention;
FIG. 2 is a schematic diagram of a three-dimensional terrain model according to the present invention;
FIG. 3 is a schematic diagram of a two-dimensional convolution according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, the present invention is a VR natural environment automatic construction method based on big data and AI, including: constructing a terrain model, generating a three-dimensional terrain model and generating an environment model based on deep learning;
the terrain model construction comprises the following processes:
a00: contour map processing: namely, the observation image is represented by an image matrix F (x, y);
wherein an observation image is obtained by some observation system observing the objective world by a particular means; the objective world environment is three-dimensional, while the observation image is two-dimensional. On a computer, an image can be represented by a two-dimensional array such as F (x, y), regarded as an image matrix F; each element of the matrix corresponds to one image point, and its value is the gray level of that pixel. If all pixels of the image take only the two gray levels black and white, the image is called a binary image; otherwise it is a grayscale or color image. The native storage format of a discrete digital image is the raster data format;
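As a minimal illustration of this representation (assuming Python with NumPy and imageio, neither of which the patent names, and a hypothetical input file name), a scanned map can be loaded as a gray-level matrix:

```python
import numpy as np
import imageio.v3 as iio  # any image-loading library would do

# Load the scanned contour map as a two-dimensional gray-level matrix F(x, y).
F = iio.imread("contour_map.png")       # hypothetical input file
if F.ndim == 3:                         # color scan: collapse to gray levels
    F = F.mean(axis=2).astype(np.uint8)

rows, cols = F.shape                    # one matrix element per pixel
print(rows, cols, F.min(), F.max())     # gray levels span 0 (black) .. 255 (white)
```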
a01: contour image acquisition and preprocessing: removing the noise of the observed image and irrelevant information to obtain a contour image F (x, y); the irrelevant information comprises longitude and latitude lines, mark points and the like;
a02: binarization processing of the contour image F (x, y): determining a binarization threshold value, and converting the contour image F (x, y) into an image only with foreground color and background color;
the gray-level statistical histogram of an image is a one-dimensional discrete function that gives an overall description of the image's gray levels; it is defined as:
P(s_k) = n_k / n,  k = 0, 1, 2, ..., L-1;
in the above formula, s_k is the k-th gray level of the image F (x, y), n_k is the number of pixels in the image having that gray level, and n is the total number of pixels in the image; the histogram can be used for image enhancement or coding, where image enhancement highlights the image foreground to make the elements of interest more evident;
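A short sketch of this statistic (assuming Python with NumPy, and an 8-bit scan so that L = 256):

```python
import numpy as np

def gray_histogram(F: np.ndarray, L: int = 256) -> np.ndarray:
    """Normalized gray-level histogram: P(s_k) = n_k / n for k = 0..L-1."""
    n_k = np.bincount(F.ravel(), minlength=L)  # pixel count n_k per gray level
    return n_k / F.size                        # divide by total pixel count n
```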
image binarization means that, once the binarization threshold is determined, the image is converted into an image with only two colors, foreground (1) and background (0); the key to binarization is setting the threshold well, for if the threshold is chosen poorly, foreground that should be kept is filtered out and information is lost;
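The patent does not fix a rule for choosing the threshold; one common choice, used here purely as an illustrative stand-in, is Otsu's method, which picks the threshold maximizing the between-class variance of the histogram (a sketch in Python/NumPy; the assumption that contour lines are dark on a light background determines the final comparison):

```python
import numpy as np

def otsu_binarize(F: np.ndarray) -> np.ndarray:
    """Binarize F into foreground (1) and background (0) with Otsu's threshold."""
    p = np.bincount(F.ravel(), minlength=256) / F.size  # gray-level histogram
    bins = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()       # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:t] * bins[:t]).sum() / w0      # class mean gray levels
        m1 = (p[t:] * bins[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return (F < best_t).astype(np.uint8)        # dark contour lines become 1
```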
a03: skeletonizing contour lines of the contour line image F (x, y) by adopting a layer-by-layer peeling thinning algorithm;
wherein, in a scanned contour image, a contour line is generally not of single-pixel width: it is typically 4-5 pixels wide and uneven in thickness. If contour tracing were performed directly, the large number of pixels on each line would slow the tracing down, and because the traced pixel is not necessarily the center pixel of the line, errors would be introduced during tracing, strongly affecting the accuracy of the final result. Before tracing, the contour skeleton must therefore be extracted, i.e. the image must be thinned. We adopt a layer-by-layer peeling thinning algorithm: first find a pixel on the edge of the line image, then, taking it as the center, detect the gray values of its 8-neighborhood in a fixed order to decide whether the center pixel is set to 0, and find the edge pixel adjacent to it to continue tracking and peeling. The advantage of the algorithm is that only edge pixels are processed, so thinning is efficient. Throughout this process the following three constraints must be met: line-segment endpoints are not deleted, originally connected points are not disconnected, and regions are not over-eroded;
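The patent's peel is a custom algorithm; as a stand-in with the same intent and constraints (single-pixel skeleton, endpoints and connectivity preserved), a sketch can lean on scikit-image's thinning, which is an assumption about tooling rather than the patent's own code:

```python
import numpy as np
from skimage.morphology import thin  # stand-in for the patent's custom peel

def skeletonize_contours(binary: np.ndarray) -> np.ndarray:
    """Reduce 4-5 pixel wide contour lines to a one-pixel-wide skeleton.

    skimage's thinning also strips border pixels layer by layer while
    preserving endpoints and connectivity -- the three constraints the
    patent lists -- but its pixel tests differ from the patent's algorithm.
    """
    return thin(binary.astype(bool)).astype(np.uint8)
```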
a04: marking skeleton points by depth values: traversing the points in the contour image F (x, y), and judging whether the current point is a skeleton point by comparing the depth values of the current point and its neighboring points;
wherein, since our peeling algorithm works from the edge inward, it is very useful for controlling the peeling process to adopt a structure that reflects the edge attributes of a line; for this purpose the concepts of neighborhood and depth are introduced. The 8 points arranged clockwise around a point A are numbered 0-7 (the layout figure is omitted in this text):
points 0, 1, 2, 3, 4, 5, 6, 7 form the 8-neighborhood of A, and points 1, 3, 5, 7 form its 4-neighborhood. When all 8 neighbors of a point are 1, i.e. the point is an interior foreground point of the current binary image, we say the point has a depth value of 2, also expressed as its distance to the boundary being 2; when all 8 neighbors of point A already have depth value 2, the depth value of A is 3, and so on. By this principle, after traversing the image array the depth value of every point is computed; the points of maximal depth are skeleton points, and the remaining points are not;
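This depth is precisely the chessboard (8-neighborhood) distance to the background, so a sketch can compute it with SciPy's chessboard distance transform; taking skeleton points as local maxima of depth is our reading of "the points of maximal depth", not the patent's literal procedure:

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt, maximum_filter

def mark_skeleton_points(binary: np.ndarray) -> np.ndarray:
    """Depth = chessboard distance to background; skeleton = local depth maxima."""
    depth = distance_transform_cdt(binary, metric="chessboard")
    # Keep a foreground point when no point in its 8-neighborhood is deeper.
    local_max = depth == maximum_filter(depth, size=3)
    return (local_max & (binary > 0)).astype(np.uint8)
```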
a05: stripping from the outside to the inside according to the contour depth value mark;
since the image data source is a printed map, the raw contour often occupies several pixels of width, which both increases the processing workload and introduces error; the boundary points of the contour must therefore be stripped away so that only the central skeleton remains and the processed contour occupies a single pixel of width. Stripping proceeds from the outside inward according to the depth-value marks of the contour lines;
a06: contour line breakpoint connection:
because map printing quality and image scanning quality are limited, contour lines that ought to be closed may appear broken after the above operations and need to be connected; for this, machine connection and manual connection are combined. For machine connection, first traverse the image and find a breakpoint, then take it as the center point; applying window operators of 5×5, 7×7, 9×9 and so on in turn, seek another breakpoint within the operator region; once found, clear the breakpoint marks and connect the two breakpoints. The other breakpoint should normally satisfy the following constraints: 1. the remaining neighborhood points of the two breakpoints are not in the same direction; 2. the two breakpoints lie on the same side of any contour line, never on opposite sides. Provided these conditions hold, the operator size can be enlarged so that more breakpoints are connected automatically;
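A sketch of the machine-connection search under stated simplifications: a breakpoint is taken to be a skeleton pixel with exactly one foreground neighbor, the two directional/same-side constraints above are omitted, and the bridge is drawn as a straight run of pixels (Python/NumPy; none of these choices is spelled out in the patent):

```python
import numpy as np

def find_breakpoints(skel: np.ndarray):
    """Breakpoints: foreground pixels with exactly one 8-neighbor set."""
    pts = []
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if skel[y, x] and skel[y - 1:y + 2, x - 1:x + 2].sum() == 2:
                pts.append((y, x))
    return pts

def connect_breakpoints(skel: np.ndarray, max_half_width: int = 4) -> np.ndarray:
    """Pair nearby breakpoints with growing window operators (5x5, 7x7, 9x9...)."""
    open_pts = set(find_breakpoints(skel))
    for r in range(2, max_half_width + 1):       # a 5x5 window has half-width 2
        for (y, x) in list(open_pts):
            if (y, x) not in open_pts:           # already paired this pass
                continue
            for (v, u) in list(open_pts):
                if (v, u) == (y, x) or max(abs(v - y), abs(u - x)) > r:
                    continue
                n = max(abs(v - y), abs(u - x))  # draw a straight pixel bridge
                for k in range(n + 1):
                    skel[y + (v - y) * k // n, x + (u - x) * k // n] = 1
                open_pts.discard((y, x))
                open_pts.discard((v, u))
                break
    return skel
```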
a07: tracing the contour lines and marking elevation values;
The principle and steps of the line-segment tracing algorithm are:
(1) Search for and determine the starting point of the line segment, and record its coordinates (x, y);
(2) Taking the determined point as the center, search for the next untracked point in the 8 directions northwest, north, northeast, east, southeast, south, southwest and west; if there is no such point, exit; if there is, record its coordinates and search direction, and determine the search direction of the next point;
(3) According to the search direction determined in the previous step, take the newly found point as the new discrimination center and return to operation (2), cycling until the other end point is reached.
(4) In this way all points on the line segment are tracked automatically. Because tracing a closed curve and tracing an open line segment terminate differently, the concept of region-boundary tracing is introduced: a line of unit width is regarded as a region, so its boundary coincides with the line itself and both kinds of line are traced with a consistent termination rule; a closed curve, however, needs to be traversed only once, while an open line segment needs to be traversed twice;
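A sketch of steps (1)-(3) on a single-pixel skeleton (Python/NumPy; the patent's bookkeeping of search directions is simplified here to "take the first untraced neighbor in the fixed clockwise order"):

```python
import numpy as np

# 8 search directions: NW, N, NE, E, SE, S, SW, W (the order the patent lists)
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def trace_segment(skel: np.ndarray, start):
    """Follow a one-pixel-wide line from `start`, returning its point list."""
    traced = np.zeros_like(skel, dtype=bool)
    path, (y, x) = [start], start
    traced[start] = True
    while True:
        for dy, dx in DIRS:                      # probe the 8 directions
            v, u = y + dy, x + dx
            if (0 <= v < skel.shape[0] and 0 <= u < skel.shape[1]
                    and skel[v, u] and not traced[v, u]):
                traced[v, u] = True
                path.append((v, u))
                y, x = v, u
                break
        else:                                    # no untraced neighbor: done
            return path
```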
generating the three-dimensional terrain model comprises the following processes:
a08: generating an elevation terrain data file;
the main work is to compute the elevation values of N×M (N rows by M columns) uniformly distributed points within a given map range; computing the N×M points is a loop, so generating the regular digital terrain model reduces to an elevation algorithm for a single point;
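The patent leaves the single-point elevation algorithm open; one common realization, shown here only as an assumed stand-in, interpolates each grid point from the traced, elevation-labelled contour points by inverse-distance weighting:

```python
import numpy as np

def grid_elevations(contour_pts, heights, n_rows, m_cols, extent):
    """Fill an N x M elevation grid by inverse-distance weighting (IDW).

    contour_pts: (K, 2) array of (x, y) points traced from the contours
    heights:     (K,) elevation value marked on each point's contour
    extent:      (xmin, xmax, ymin, ymax) map range
    """
    xmin, xmax, ymin, ymax = extent
    xs = np.linspace(xmin, xmax, m_cols)
    ys = np.linspace(ymin, ymax, n_rows)
    dem = np.empty((n_rows, m_cols))
    for i, y in enumerate(ys):                   # one elevation per grid point
        for j, x in enumerate(xs):
            d2 = (contour_pts[:, 0] - x) ** 2 + (contour_pts[:, 1] - y) ** 2
            w = 1.0 / np.maximum(d2, 1e-12)      # inverse-squared-distance weights
            dem[i, j] = (w * heights).sum() / w.sum()
    return dem
```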
a09: generating a three-dimensional terrain model;
as shown in fig. 2, 3D MAX is a fairly ideal solid modeling tool; in particular, it can conveniently import a terrain data file and generate a terrain model file in the OpenFlight standard, and an OpenFlight file can be used directly by some view management systems; the steps for generating a model file with the 3D MAX software are:
c00: selecting Import in the 3DMAX software, and then selecting the DED file generated in the previous step in the terrain file selection dialog box;
c01: setting the control parameters of the model file appropriately in the Import Terrain dialog box;
c02: previewing the terrain to be generated, and clicking OK to generate the three-dimensional terrain model;
the method for generating the environment model based on deep learning specifically comprises the following steps:
a10: establishing a deep learning natural environment model database;
after the terrain model has been generated, the models in the environment need to be generated on the terrain; the different objects corresponding to color patches of different gray values on the two-dimensional satellite image are identified by a deep-learning image recognition method and matched to the corresponding models. Deep learning uses a computer to simulate human learning behavior, acquire new knowledge and skills, reorganize the existing knowledge structure and continuously optimize the knowledge base, finally making an optimal decision. The natural environment model library is the foundation of the deep learning: the model database can contain all kinds of forests, lakes, buildings, marshland and traffic roads;
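A minimal sketch of what such a model database could look like: a mapping from land-cover classes to reusable 3-D assets (every class name and file path below is a hypothetical illustration, not taken from the patent):

```python
# Hypothetical natural-environment model library: each land-cover class the
# recognizer can output maps to a reusable 3-D asset (e.g. OpenFlight files).
MODEL_LIBRARY = {
    "forest":   "assets/forest_patch.flt",
    "lake":     "assets/water_plane.flt",
    "building": "assets/building_block.flt",
    "marsh":    "assets/marshland.flt",
    "road":     "assets/road_segment.flt",
}
CLASS_NAMES = list(MODEL_LIBRARY)  # label order used by the classifier below
```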
a11: establishing a convolutional neural network;
referring to fig. 3, convolution is a common mathematical operation, applied most widely in signal analysis and automatic control; the mathematical definition of the convolution of one-dimensional continuous signals is:
s(t) = ∫ x(a) w(t − a) da
and the convolution of discrete signals is:
s(t) = Σ_a x(a) w(t − a)
In machine learning applications the input is usually a high-dimensional data array, also called a tensor. When the variables are two-dimensional, the discrete convolution extends to:
S(i, j) = Σ_m Σ_n X(m, n) W(i − m, j − n)
where i, j = 0, 1, 2, ..., N − 1;
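A direct sketch of this two-dimensional discrete convolution (Python/NumPy; the "valid" output size is our choice, and the kernel flip is what distinguishes true convolution from the cross-correlation most deep-learning layers actually compute):

```python
import numpy as np

def conv2d(X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """'Valid' 2-D discrete convolution of image X with kernel W."""
    W = W[::-1, ::-1]                            # flip kernel for true convolution
    kh, kw = W.shape
    oh, ow = X.shape[0] - kh + 1, X.shape[1] - kw + 1
    S = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):                      # S(i,j) = sum of elementwise products
            S[i, j] = (X[i:i + kh, j:j + kw] * W).sum()
    return S
```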
a12: establishing a natural environment recognition training model;
deep-learning image recognition with a convolutional neural network works as follows: small regions are selected at random from the image as training samples, and features of specific information are learned from these samples; the features are then used as filters convolved with the original image, yielding activation values of the different features at every position of the original image; the activation values are fed into a classifier for training, which achieves image classification; finally the classification accuracy can be improved by connecting regions, filtering noise and similar steps, so that the ground information is recognized. The high-resolution-image natural environment extraction method uses supervised deep-learning classification to adjust the learned feature parameters of all convolutional layers, so that correct natural-environment model features are learned to the fullest extent;
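The patent specifies no network architecture, so the following patch-classifier sketch is entirely illustrative (assuming PyTorch, 32×32 RGB patches, and the five hypothetical classes of the model library above):

```python
import torch
import torch.nn as nn

class LandCoverCNN(nn.Module):
    """Tiny CNN classifying 32x32 image patches into land-cover classes."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(           # learned filters (step A11)
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # 32x32 -> 8x8 maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)                     # feature activation maps
        return self.classifier(f.flatten(1))     # per-class scores

# Training skeleton (step A12): patches cut from remote-sensing images,
# labels drawn from the model-library classes.
model = LandCoverCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```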
a13: acquiring a high-resolution remote sensing environment image, and performing preprocessing such as noise filtering and smoothing on the image to obtain a preprocessed top view;
a14: automatically recognizing the preprocessed top view using the environment models in the model library, the automatic recognition being performed by the trained convolutional neural network;
a15: generating an environment model according to the recognition result;
the system recognizes the natural environment in the high-resolution satellite remote sensing image, and natural environment models are generated within the terrain model according to the output information and the matching information in the model library; a high-precision natural environment map model is finally obtained.
It should be noted that, in the above system embodiment, each included unit is only divided according to functional logic, but is not limited to the above division as long as the corresponding function can be implemented; in addition, the specific names of the functional units are only for the convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
In addition, those skilled in the art can understand that all or part of the steps in the method for implementing the embodiments described above can be implemented by a program to instruct related hardware, and the corresponding program can be stored in a computer readable storage medium.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to understand and utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (6)

1. A VR natural environment automatic construction method based on big data and AI is characterized by comprising the following steps: the method comprises the steps of constructing a terrain model, generating a three-dimensional terrain model and generating an environment model based on deep learning;
the terrain model building method comprises the following processes:
a00: contour map processing: i.e. the observation image is represented by an image matrix F (x, y);
a01: contour image acquisition and preprocessing: removing the noise of the observation image and irrelevant information to obtain a contour image F (x, y);
a02: binarization processing of the contour image F (x, y): determining a binarization threshold value, and converting the contour image F (x, y) into an image only with foreground color and background color;
a03: skeletonizing contour lines of the contour line image F (x, y) by adopting a layer-by-layer peeling thinning algorithm;
a04: marking skeleton points by depth values: traversing the points in the contour image F (x, y), and judging whether the current point is a skeleton point by comparing the depth values of the current point and its neighboring points;
a05: stripping from the outside to the inside according to the contour depth value mark;
a06: contour line breakpoint connection: connecting the contour lines at break points in a mode combining machine connection and manual connection;
a07: tracing the contour lines and marking elevation values;
the generation of the three-dimensional terrain model comprises the following processes:
a08: generating an elevation terrain data file;
a09: generating a three-dimensional terrain model;
the generating of the environment model based on deep learning specifically includes the following steps:
a10: establishing a deep learning natural environment model database;
a11: establishing a convolutional neural network;
a12: establishing a natural environment recognition training model;
a13: acquiring a high-resolution remote sensing environment image, and performing preprocessing such as noise filtering and smoothing on the image to obtain a preprocessed top view;
a14: automatically recognizing the preprocessed top view using the environment models in the model library, the automatic recognition being performed by the trained convolutional neural network;
a15: and generating an environment model according to the recognition result.
2. The method for automatically constructing the VR natural environment based on big data and AI of claim 1, wherein the layer-by-layer peeling thinning algorithm specifically comprises the following process: first, find a pixel on the edge of the line image; taking that pixel as the center, detect the gray values of its 8-neighborhood in a fixed order to decide whether the central pixel is set to 0, and find the edge pixel adjacent to the central pixel to continue tracking and peeling.
3. The method for automatically constructing the VR natural environment based on big data and AI of claim 2, wherein the limiting conditions of the layer-by-layer skinning refinement algorithm are as follows: the end points of the line segment are not eliminated, the originally connected points are not interrupted, and the area is not excessively etched.
4. The big-data-and-AI-based VR natural environment auto-construction method of claim 1, wherein the machine connection comprises: traversing the contour image to find a breakpoint, and seeking another breakpoint with a window operator centered on the first; once found, clearing the breakpoint marks and connecting the two breakpoints;
wherein, the constraints on the other breakpoint are as follows: the remaining neighborhood points of the two breakpoints are not in the same direction; and the two breakpoints are located on the same side of any contour line.
5. The method for VR natural environment automatic construction based on big data and AI according to claim 1, wherein a07 comprises the following processes:
b00: determining a starting point of a search line segment and using the starting point as a current point;
b01: taking the current point as a center, and searching a next untracked point according to 8 directions of northwest, north, northeast, east, southeast, south, southwest and west; if there are no points, exit; if there is a point, recording the coordinate and the searching direction of the point, and determining the searching direction of the next point;
b02: b01 is executed by taking the newly found point as a new discrimination center according to the searching direction determined by B01;
b03: tracing terminates when all points on the line segment have been traced or the other end point is reached.
6. The method for VR natural environment auto-construction based on big data and AI of claim 1, wherein a09 specifically includes the following:
c00: selecting Import in the 3DMAX software, and then selecting the DED file generated in the previous step in the terrain file selection dialog box;
c01: setting the control parameters of the model file appropriately in the Import Terrain dialog box;
c02: previewing the terrain to be generated, and clicking OK to generate the three-dimensional terrain model.
CN201911070540.6A 2019-11-05 2019-11-05 VR natural environment automatic construction method based on big data and AI Active CN110910496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911070540.6A CN110910496B (en) 2019-11-05 2019-11-05 VR natural environment automatic construction method based on big data and AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911070540.6A CN110910496B (en) 2019-11-05 2019-11-05 VR natural environment automatic construction method based on big data and AI

Publications (2)

Publication Number Publication Date
CN110910496A CN110910496A (en) 2020-03-24
CN110910496B (en) 2023-04-18

Family

ID=69816405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911070540.6A Active CN110910496B (en) 2019-11-05 2019-11-05 VR natural environment automatic construction method based on big data and AI

Country Status (1)

Country Link
CN (1) CN110910496B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115237503B (en) * 2022-08-01 2024-04-26 广州市影擎电子科技有限公司 VR-based ecological model building method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942828A (en) * 2014-01-21 2014-07-23 中国科学院遥感与数字地球研究所 Culture-heritage three-dimensional-scene generation system and method
CN106547880A (en) * 2016-10-26 2017-03-29 重庆邮电大学 A kind of various dimensions geographic scenes recognition methodss of fusion geographic area knowledge
CN107527038A (en) * 2017-08-31 2017-12-29 复旦大学 A kind of three-dimensional atural object automatically extracts and scene reconstruction method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9767565B2 (en) * 2015-08-26 2017-09-19 Digitalglobe, Inc. Synthesizing training data for broad area geospatial object detection
CN105913485B (en) * 2016-04-06 2019-02-12 北京小小牛创意科技有限公司 A kind of generation method and device of three-dimensional virtual scene

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942828A (en) * 2014-01-21 2014-07-23 中国科学院遥感与数字地球研究所 Culture-heritage three-dimensional-scene generation system and method
CN106547880A (en) * 2016-10-26 2017-03-29 重庆邮电大学 A kind of various dimensions geographic scenes recognition methodss of fusion geographic area knowledge
CN107527038A (en) * 2017-08-31 2017-12-29 复旦大学 A kind of three-dimensional atural object automatically extracts and scene reconstruction method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Modeling method of watershed three-dimensional virtual environment based on VPB and osgGIS; Zhang Shanghong et al.; Journal of Hydroelectric Engineering; 2012-06-25 (No. 03); full text *

Also Published As

Publication number Publication date
CN110910496A (en) 2020-03-24

Similar Documents

Publication Publication Date Title
CN111986099B (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN110136170B (en) Remote sensing image building change detection method based on convolutional neural network
JP6739517B2 (en) Lane recognition modeling method, device, storage medium and device, and lane recognition method, device, storage medium and device
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
CN110458895B (en) Image coordinate system conversion method, device, equipment and storage medium
Rottensteiner et al. The ISPRS benchmark on urban object classification and 3D building reconstruction
Hormese et al. Automated road extraction from high resolution satellite images
CN112818925B (en) Urban building and crown identification method
CN111414954B (en) Rock image retrieval method and system
CN114898212B (en) Method for extracting multi-feature change information of high-resolution remote sensing image
CN115375868B (en) Map display method, remote sensing map display method, computing device and storage medium
Xie et al. OpenStreetMap data quality assessment via deep learning and remote sensing imagery
CN104217459A (en) Spherical feature extraction method
Alidoost et al. Y-shaped convolutional neural network for 3d roof elements extraction to reconstruct building models from a single aerial image
CN114463623A (en) Method and device for detecting farmland change based on multi-scale remote sensing image
CN109727255B (en) Building three-dimensional model segmentation method
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN110910496B (en) VR natural environment automatic construction method based on big data and AI
CN117171533B (en) Real-time acquisition and processing method and system for geographical mapping operation data
CN114758087B (en) Method and device for constructing urban information model
Ruiz-Lendínez et al. Deep learning methods applied to digital elevation models: state of the art
Kulkarni et al. “Parametric Methods to Multispectral Image Classification using Normalized Difference Vegetation Index
Yuan et al. Graph neural network based multi-feature fusion for building change detection
Patel et al. Road Network Extraction Methods from Remote Sensing Images: A Review Paper.
Widyaningrum et al. Tailored features for semantic segmentation with a DGCNN using free training samples of a colored airborne point cloud

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Room 1105b, Future Center, Institute of Advanced Technology, University of Science and Technology of China, Wangjiang West Road, High-tech Development Zone, Hefei City, Anhui Province, 230000

Applicant after: Anhui Chengfang Intelligent Technology Co.,Ltd.

Address before: Room 1105b, Future Center, Institute of Advanced Technology, University of Science and Technology of China, Wangjiang West Road, Hefei High-tech Development Zone, Hefei City, Anhui Province, 230000

Applicant before: HEFEI CHENGFANG INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant
GR01 Patent grant