CN116843843A - March road line three-dimensional scene simulation method - Google Patents
March road line three-dimensional scene simulation method
- Publication number
- CN116843843A (application CN202310244752.1A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- road
- landform
- pixel
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/08—Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/04—Architectural design, interior design
Abstract
The invention discloses a three-dimensional scene simulation method for a march road, comprising the following steps: acquiring road data corresponding to the march road between a departure point and a destination, together with elevation data and two-dimensional overhead (nadir) images along the road; performing target recognition on the two-dimensional overhead images to obtain landform feature data along the road, and matching these data with the road data and the elevation data by spatial coordinates in a three-dimensional geographic coordinate system; and generating, from the respective data, simulated landform scenery, a simulated road, and a three-dimensional simulated terrain, then rendering them together to produce the three-dimensional scene along the march road. The method can apply standardized processing to overhead images of various specifications along the road, accurately recognize the two-dimensional landform objects they contain, and rapidly generate the three-dimensional simulated scene along the road; it therefore offers a wide application range, low implementation difficulty, and high efficiency.
Description
Technical Field
The invention relates to the technical field of computer simulation, and in particular to a method for simulating a three-dimensional scene along a march road.
Background
In fields such as military operations, counter-terrorism, and emergency rescue, a professional team must often be dispatched to an unfamiliar area to perform a task, traveling along a planned road in a motorized march, for example by driving special-equipment vehicles or rescue vehicles.
To evaluate the march road accurately before setting out, which helps improve marching speed, strengthen anticipation, avoid accidents, and allow preparations to be made in advance, the team generally needs to become familiar with the natural and human conditions along the road as early as possible. A simulation environment of the route must therefore be constructed quickly, so that marching efficiency can be predicted and evaluated ahead of time and thorough preparations made for the actual road march.
Disclosure of Invention
The invention mainly solves the technical problem of providing a three-dimensional scene simulation method for a march road, addressing the lack, in the prior art, of a way to quickly build a three-dimensional simulated scene corresponding to a march road.
To solve this technical problem, the invention adopts the following technical scheme: a three-dimensional scene simulation method along a march road is provided, comprising the steps described below.
The beneficial effects of the invention are as follows. The disclosed method performs specification-consistency processing on two-dimensional landform overhead image samples to obtain standardized patterns, then labels and crops the two-dimensional landforms in those patterns to obtain a plurality of two-dimensional landform training samples. Image features are extracted from the training samples and used to train a two-dimensional landform recognition neural network model. The model then performs target recognition on an application image containing overhead two-dimensional landforms, extracts two-dimensional landform feature data, and feeds the data into a three-dimensional landform simulation environment, where the height and appearance of the three-dimensional landforms are laid out by simulation, rapidly generating the corresponding three-dimensional landform scene. The method can apply standardized processing to overhead landform images of various specifications, accurately recognize the two-dimensional landform objects they contain, and quickly generate the corresponding three-dimensional landform scenes; it therefore offers a wide application range, low implementation difficulty, and high efficiency.
Drawings
FIG. 1 is a flow chart of one embodiment of the method for simulating a three-dimensional scene along a march road according to the present invention;
FIG. 2 is a schematic diagram of the pixel coordinate system in another embodiment of the method;
FIG. 3 is a schematic view of an application image recognized into a plurality of landform feature data in another embodiment of the method;
FIG. 4 is a schematic representation of the display of elevation data in a three-dimensional geographic coordinate system in another embodiment of the method;
FIG. 5 is a schematic representation of the elevation data rendered with a single terrain-background color in another embodiment of the method;
FIG. 6 is a schematic diagram of a woodland landform simulation in another embodiment of the method;
FIG. 7 is a schematic diagram of a grassland landform simulation in another embodiment of the method;
FIG. 8 is a schematic view of a three-dimensional landform scene in another embodiment of the method;
FIG. 9 is a schematic view of the road data display in another embodiment of the method;
FIG. 10 is an illustration of the three-dimensional scene along the march road in another embodiment of the method;
FIG. 11 is a schematic diagram of the standardized segmentation of an image in another embodiment of the method;
FIG. 12 is an illustration of two-dimensional landform labeling and cropping of a standardized pattern in another embodiment of the method;
FIG. 13 is a composition diagram of the initial neural network recognition model in another embodiment of the method;
FIG. 14 is a schematic view of the two-dimensional landform objects recognized in a standardized pattern in another embodiment of the method.
Detailed Description
In order that the invention may be readily understood, a more particular description thereof will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Preferred embodiments of the present invention are shown in the drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The term "and/or" as used in this specification includes any and all combinations of one or more of the associated listed items.
FIG. 1 shows a flow chart of one embodiment of the method for simulating a three-dimensional scene along a march road, which comprises the following steps:
Step S1: acquiring road data corresponding to the march road between a departure point and a destination, together with elevation data and two-dimensional overhead images along the road;
Step S2: performing target recognition on the two-dimensional overhead images to obtain landform feature data along the road, entering the road data and the elevation data into a three-dimensional geographic coordinate system, and matching them with the landform feature data by spatial coordinates;
Step S3: generating simulated landform scenery corresponding to the landform feature data, a simulated road corresponding to the road data, and a three-dimensional simulated terrain corresponding to the elevation data; then, according to the spatial information features of the landform feature data and the road data in the three-dimensional geographic coordinate system, rendering the simulated landform scenery and the simulated road onto the three-dimensional simulated terrain to generate the three-dimensional scene along the march road.
In the method of FIG. 1, on the one hand, the two-dimensional overhead images along the road (for example ground-surface orthophotos or aerial survey images) provide the actual landform characteristics of the spatial region; these characteristics vary dynamically, for example under the influence of rain and snow, of seasonal change, or of natural disasters and human activity. On the other hand, the road data determine the march road itself, and the elevation data reflect the relief of the region; both are stable over time, except for terrain changes caused by large earthworks, land reclamation, earthquakes, collapses, and the like, and even such changes are stable over long periods before and after they occur.
Therefore, with this embodiment of the invention, a three-dimensional march-road scene corresponding to an actual spatial region can be constructed and presented in the simulation environment more quickly and accurately, meeting the requirements of both efficiency and precision.
Preferably, in step S1, the two-dimensional overhead image along the road is represented in a pixel coordinate system, as shown in FIG. 2, in which the lower-left corner of the image is the origin pixel coordinate (0, 0). The abscissa increases to the right by one pixel at a time; for example, a maximum value Xmax - 1 = 1023 indicates Xmax = 1024 pixels horizontally. The ordinate increases upward by one pixel at a time; for example, a maximum value Ymax - 1 = 767 indicates Ymax = 768 pixels vertically. An image with a total of 1024 x 768 pixels is thus fully determined by assigning a pixel value to each pixel coordinate, with each pixel value expressed in the three primary colors (R, G, B).
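The lower-left-origin convention above differs from the top-left row/column indexing used by most image libraries, so the row index must be flipped when reading pixel values; a minimal sketch of the conversion (the function name and the stand-in image are illustrative, not from the patent):

```python
import numpy as np

def pixel_value(img, x, y):
    """Return the (R, G, B) value at patent-style pixel coordinate (x, y).

    The patent's pixel coordinate system puts the origin (0, 0) at the
    lower-left corner, x increasing rightward and y increasing upward.
    Arrays are indexed [row, col] from the top-left, so flip the row.
    """
    ymax = img.shape[0]        # number of rows = Ymax
    row = ymax - 1 - y         # y = 0 is the bottom row
    return tuple(int(c) for c in img[row, x])

# Tiny 1024 x 768 stand-in: all black except a red pixel at the patent
# coordinate (0, 0), i.e. the bottom-left corner of the image.
img = np.zeros((768, 1024, 3), dtype=np.uint8)
img[767, 0] = (255, 0, 0)

assert pixel_value(img, 0, 0) == (255, 0, 0)
assert pixel_value(img, 1023, 767) == (0, 0, 0)
```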
Preferably, in step S2, the landform feature data along the road are obtained as follows: the trained landform recognition neural network model performs target recognition on the two-dimensional overhead image along the road and extracts first to Nth landform feature data corresponding to N different landform types, where N is the number of landform categories and N >= 2. Spatial-coordinate matching of the landform feature data along the road then consists in matching the first to Nth landform feature data with the spatial coordinates of the road data and of the elevation data along the road.
Preferably, the first to Nth landform feature data are represented in the pixel coordinate system shown in FIG. 2. The first landform feature data are represented as
A_1 = { a_1(0, 0), a_1(0, 1), ..., a_1(Xmax - 1, Ymax - 1) },
and the Nth landform feature data are represented as
A_N = { a_N(0, 0), a_N(0, 1), ..., a_N(Xmax - 1, Ymax - 1) },
where A_1 is the set of pixel values of the pixels belonging to the first landform feature data in the pixel coordinate system: a_1(0, 0) is the pixel value at the origin pixel coordinate (0, 0), a_1(0, 1) the pixel value at pixel coordinate (0, 1), and so on up to a_1(Xmax - 1, Ymax - 1), the pixel value at the maximum pixel coordinate (Xmax - 1, Ymax - 1). Likewise, A_N is the set of pixel values of the pixels belonging to the Nth landform feature data, with a_N(0, 0), a_N(0, 1), ..., a_N(Xmax - 1, Ymax - 1) the pixel values at the corresponding pixel coordinates.
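The per-type sets A_1 ... A_N can be realized as masked copies of the source image that all share its pixel coordinate system, so that superimposing them recovers the classified pixels; a sketch under that reading, with all names hypothetical:

```python
import numpy as np

def landform_feature_data(img, class_map, n_classes):
    """Split an overhead image into per-landform-type pixel-value layers.

    img       : H x W x 3 RGB array.
    class_map : H x W array of type indices 1..N (0 = unclassified),
                e.g. the output of a segmentation model.
    Returns [A_1, ..., A_N]; A_k keeps the original pixel values where
    type k was recognized and zeros elsewhere, so every layer lives in
    the same pixel coordinate system as the source image.
    """
    layers = []
    for k in range(1, n_classes + 1):
        a_k = np.zeros_like(img)
        mask = class_map == k
        a_k[mask] = img[mask]
        layers.append(a_k)
    return layers

# Toy 2 x 2 image: type 1 (e.g. woodland) on the left column, type 2 on
# the right; superimposing the layers reconstructs the image exactly.
img = np.array([[[10, 20, 30], [40, 50, 60]],
                [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)
class_map = np.array([[1, 2], [1, 2]])
a1, a2 = landform_feature_data(img, class_map, 2)
assert np.array_equal(a1 + a2, img)
```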
Preferably, the first to Nth landform feature data are all data sets established in the same pixel coordinate system, so they can be superimposed to recover the multiple landform feature data of the two-dimensional overhead image along the road. Accordingly, in step S2, the trained neural network model (described in detail below) decomposes the different landform features of the overhead image into first to Nth landform feature data, all established in the pixel coordinate system of that image.
For example, as shown in FIG. 3, landform recognition on the two-dimensional overhead image A11 by the trained landform recognition neural network model generates first landform feature data A_1 corresponding to a rock landform, second landform feature data A_2 corresponding to a woodland landform, third landform feature data A_3 corresponding to a cultivated-land landform, fourth landform feature data A_4 corresponding to a water-body landform, and fifth landform feature data A_5 corresponding to a grassland landform.
Preferably, when the region covered by the two-dimensional overhead image A11 is small, the influence of the Earth's curvature can be ignored, and the actual region corresponding to the image can be treated as a planar region (while keeping its differences in elevation) rather than a curved one. The distance between any two points in the image then maps to an actual spatial distance through the scale, and the pixel coordinates of any point map to a small planar region centered on a position in planar geographic space. In general, the higher the pixel resolution of the image A11, the smaller the spatial region represented by a single pixel. For example, of two overhead images with the same pixel size, one covering 5 square kilometers and one covering 8 square kilometers, the former clearly gives each pixel a smaller actual spatial range and therefore displays with higher definition and sharpness.
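The relation between an image's pixel count and the ground area it covers can be made concrete with a small helper (an illustrative calculation, not part of the patent):

```python
def ground_area_per_pixel(area_km2, width_px, height_px):
    """Spatial area covered by one pixel, in square meters.

    area_km2 is the actual ground area represented by the whole image;
    the image is width_px x height_px pixels.
    """
    return area_km2 * 1_000_000 / (width_px * height_px)

# Two same-size (1024 x 768) overhead images from the example above:
a5 = ground_area_per_pixel(5, 1024, 768)   # ~6.36 m^2 per pixel
a8 = ground_area_per_pixel(8, 1024, 768)   # ~10.17 m^2 per pixel
assert a5 < a8   # the 5 km^2 image resolves finer ground detail
```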
Preferably, in step S2, the first to Nth landform feature data are mapped into the three-dimensional geographic coordinate system X1-Y1-Z1 of the simulation environment, each pixel coordinate being mapped to a coordinate point (x1, y1) in the X1-Y1 plane. Because each pixel of the landform feature data occupies a spatial region of a certain size, the corresponding coordinate point refers to the geometric center of that region. The spatial information features of the first to Nth landform feature data in the three-dimensional geographic coordinate system therefore comprise, for each pixel coordinate, the coordinate point at the geometric center of the occupied spatial region, together with the spatial region range corresponding to each pixel.
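Mapping a pixel coordinate to the geometric center of its ground footprint, as described above, might look like this (the parameters x0, y0, and pixel_size_m are assumptions introduced for illustration):

```python
def pixel_to_plane(x_px, y_px, x0, y0, pixel_size_m):
    """Map a patent-style pixel coordinate to the geometric center of
    the square ground region that pixel occupies, in a local X1-Y1 plane.

    (x0, y0) is the plane position of the image's lower-left corner and
    pixel_size_m the side length of one pixel's ground footprint.
    """
    x1 = x0 + (x_px + 0.5) * pixel_size_m   # +0.5: center, not corner
    y1 = y0 + (y_px + 0.5) * pixel_size_m
    return (x1, y1)

# Pixel (0, 0) with a 2 m footprint is centered 1 m in from each edge.
assert pixel_to_plane(0, 0, 0.0, 0.0, 2.0) == (1.0, 1.0)
assert pixel_to_plane(3, 1, 0.0, 0.0, 2.0) == (7.0, 3.0)
```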
Preferably, for the elevation data along the road, when they are entered into the three-dimensional geographic coordinate system, the first to Nth landform feature data are matched with the spatial coordinates of the elevation data as follows: the planar coordinate point (x2, y2) of each three-dimensional coordinate point (x2, y2, z2) of the elevation data is brought into correspondence with a planar coordinate point (x1, y1), yielding the elevation value of each coordinate point in the three-dimensional geographic coordinate system. As shown in FIG. 4, when the elevation data are displayed in the three-dimensional geographic coordinate system, the result only indicates the relief of the terrain; the corresponding landform features cannot be obtained from it.
Preferably, a three-dimensional simulated terrain is generated from the three-dimensional elevation data, that is, the elevation data are rendered and displayed with a single terrain-background color, as shown in FIG. 5.
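The plane-coordinate matching of elevation points described above can be sketched as a simple lookup (a real system would interpolate between elevation samples; the exact-match keys here are an illustrative simplification):

```python
def attach_elevation(plane_points, elevation):
    """Assign each landform coordinate point (x1, y1) the elevation z2
    of the elevation-data point whose plane coordinate (x2, y2)
    coincides with it, yielding (x, y, z) triples.

    elevation maps (x2, y2) -> z2.
    """
    return [(x, y, elevation[(x, y)]) for (x, y) in plane_points]

# Toy elevation grid (meters above sea level).
dem = {(0.0, 0.0): 812.0, (1.0, 0.0): 815.5, (0.0, 1.0): 810.2}
pts = attach_elevation([(0.0, 0.0), (1.0, 0.0)], dem)
assert pts == [(0.0, 0.0, 812.0), (1.0, 0.0, 815.5)]
```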
Preferably, to form a visual display image quickly in the simulation environment, the different types of landform feature data must be converted into landform scenery constructed by simulation: woodland landforms such as pine forest, shrub forest, and birch forest can be constructed as in FIG. 6, and grassland landforms such as grasses of different types and colors can be constructed to cover the ground as in FIG. 7. Accordingly, using the spatial information features of the first to Nth landform feature data in the three-dimensional geographic coordinate system, the region occupied by the pixels of each landform type can be spatially covered and rendered with the correspondingly simulated landform scenery.
Based on the same three-dimensional simulation environment, the three-dimensional simulated terrain and the landform scenery under the same geographic coordinate system can be rendered and displayed in superposition, rapidly generating the three-dimensional landform scene shown in FIG. 8.
The road data may come from open-source road data, for example from OpenStreetMap (OSM). As shown in FIG. 9, such road data generally carry two-dimensional spatial coordinate information that can be brought into correspondence in the same way: the planar coordinate point (x2, y2) of each three-dimensional coordinate point (x2, y2, z2) is matched with a planar coordinate point (x1, y1). Correspondingly, for display, the road data must likewise be converted into a simulated road, which is then rendered in superposition with the three-dimensional simulated terrain under the same geographic coordinate system, generating a three-dimensional scene with both road and landform, as shown in FIG. 10.
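Bringing OSM road coordinates into a local planar coordinate system could, for the small near-planar regions considered here, use an equirectangular approximation; the projection choice and all names below are assumptions, not specified by the patent:

```python
import math

def osm_way_to_plane(lonlat_points, lon0, lat0):
    """Project the node coordinates of an OSM-style way (lon, lat in
    degrees) into a local planar coordinate system in meters, using an
    equirectangular approximation around the reference (lon0, lat0).
    """
    r = 6_371_000.0  # mean Earth radius, meters
    out = []
    for lon, lat in lonlat_points:
        x = math.radians(lon - lon0) * r * math.cos(math.radians(lat0))
        y = math.radians(lat - lat0) * r
        out.append((x, y))
    return out

# Two nodes of a hypothetical way, ~0.001 deg east and 0.0005 deg north.
way = [(116.3910, 39.9070), (116.3920, 39.9075)]
pts = osm_way_to_plane(way, 116.3910, 39.9070)
assert pts[0] == (0.0, 0.0)
assert 80 < pts[1][0] < 90 and 50 < pts[1][1] < 60  # ~85 m E, ~56 m N
```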
Further, the construction and training of the landform recognition neural network model is described below, comprising:
Step S01: inputting two-dimensional landform overhead image samples and performing specification-consistency processing on them to obtain one or more standardized patterns;
Step S02: labeling and cropping the two-dimensional landforms in the standardized patterns to obtain a plurality of two-dimensional landform training samples;
Step S03: extracting the image features of the two-dimensional landform training samples, inputting them into an initial neural network recognition model, and performing recognition training on the two-dimensional landform targets in the standardized patterns, finally obtaining the landform recognition neural network model.
In step S01, a two-dimensional landform overhead image sample T1 comprises at least two parameters, the pixel quantity p1 and the space value k1, written T1(p1, k1), where the pixel quantity is the number of image pixels of the sample and the space value is the area of actual space the sample presents. The pixel quantity p1 is generally desired to be as large as possible and the space value k1 as small as possible: the larger the ratio p1/k1, i.e. the pixel count per unit area, the higher the image resolution of the sample.
Preferably, the two-dimensional landform overhead image samples T1 can have various sources, such as remote-sensing images taken by satellites, aerial images taken by aircraft, and near-ground overhead images taken by commercial unmanned aerial vehicles. When samples come from multiple sources, samples of different origin must undergo specification-consistency processing.
Preferably, in step S01, the specification-consistency processing makes the ratios p1/k1 of the different samples equal or nearly equal. In application the samples are therefore usually classified by image source to satisfy this requirement, and images can be compressed or interpolated to decrease or increase their pixel quantity.
In addition, the ratio p1/k1 is subject to a minimum threshold: if the ratio is too small, the resolving power for two-dimensional landforms drops, features such as outline, color, and type become hard to recognize accurately, and the landform types can no longer be distinguished, so the two-dimensional landforms can be neither labeled nor recognized accurately.
Preferably, the specification-consistency processing in step S01 further includes unifying the pixel size of the image samples, for example by performing standardized segmentation on images with larger pixel quantities, dividing them into several standardized patterns of identical pixel size while keeping the corresponding space values the same or close. The standardized patterns thus obtained not only share the same (or nearly the same) ratio p1/k1 but also have the same (or nearly the same) pixel quantity p1 and space value k1.
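The effect of specification-consistency processing on a sample's dimensions can be expressed as a scale factor (a hypothetical helper; the patent prescribes compression or interpolation but gives no formula):

```python
def consistency_scale(p1, k1, target_ratio):
    """Scale factor for an image's linear pixel dimensions so that its
    resolution ratio p1/k1 matches target_ratio.

    p1 = pixel quantity, k1 = space value (area). Scaling each linear
    dimension by s multiplies the pixel quantity by s**2, hence the
    square root.
    """
    return (target_ratio * k1 / p1) ** 0.5

# A 1,000,000-pixel sample covering 4 km^2 (ratio 250,000 px/km^2):
assert consistency_scale(1_000_000, 4, 1_000_000) == 2.0   # upsample x2
assert consistency_scale(1_000_000, 4, 62_500) == 0.5      # compress x0.5
```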
Preferably, FIG. 11 shows the standardized segmentation of an image with a large pixel quantity. In FIG. 11, the pixel quantity p1 of the two-dimensional landform overhead image sample T1 is the product of the horizontal pixel count x1 and the vertical pixel count y1, i.e. p1 = x1 * y1. The sample T1 is then segmented with a horizontal split ratio n1 (n1 = 7 in FIG. 11) and a vertical split ratio m1 (m1 = 3 in FIG. 11), so that each resulting standardized pattern of identical pixel size has horizontal pixel count x2 = x1/n1 and vertical pixel count y2 = y1/m1. The sample T1 is thus segmented into a set of standardized patterns T1 = {T1,1, T1,2, ..., T1,(n1*m1)}, where T1,1, T1,2, ..., T1,(n1*m1) denote the segmented standardized patterns, all with the same number of pixels.
Since these patterns are presented on a pixel basis, they can be represented entirely in the pixel coordinate system, i.e. by the pixel coordinates and pixel values of their pixels, with particular reference to the description of FIG. 2 above.
Preferably, when a two-dimensional landform overhead image sample is segmented into several standardized patterns, representing each pattern in the pixel coordinate system facilitates uniform standardized processing, and after each pattern has been processed the patterns can be recombined to reconstitute the processed two-dimensional landform overhead image sample.
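The standardized segmentation and later recombination described above can be sketched in a few lines (the function name and toy array sizes are illustrative; FIG. 11's split ratios n1 = 7 and m1 = 3 are reused):

```python
import numpy as np

def standardized_segmentation(sample, n1, m1):
    """Split an image sample into n1 x m1 standardized patterns of
    equal pixel size.

    sample: H x W (x C) array with H divisible by m1 and W by n1.
    Returns the patterns in row-major order; stacking them back in the
    same order reconstitutes the sample.
    """
    h, w = sample.shape[:2]
    y2, x2 = h // m1, w // n1
    return [sample[r * y2:(r + 1) * y2, c * x2:(c + 1) * x2]
            for r in range(m1) for c in range(n1)]

sample = np.arange(21 * 14).reshape(21, 14)      # 21 rows, 14 columns
tiles = standardized_segmentation(sample, n1=7, m1=3)
assert len(tiles) == 21 and tiles[0].shape == (7, 2)
# Recombination restores the original sample exactly.
rows = [np.hstack(tiles[r * 7:(r + 1) * 7]) for r in range(3)]
assert np.array_equal(np.vstack(rows), sample)
```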
In connection with step S02, fig. 12 shows an example of two-dimensional landform labeling and cropping of the standardized pattern, in which a plurality of two-dimensional landform training samples 11 to 15 can be seen. The pixel characteristics of the two-dimensional landform training samples comprise white pixels, dark green pixels, light green pixels and the like. And the spatial orientation features of the two-dimensional landform training samples can be determined according to the spatial orientation of the shot image in the map, for example, the azimuth angle of each side of the training sample is determined by taking the north orientation as 0-angle reference.
Thus, when the two-dimensional relief training sample is denoted as J i Where i represents a sequence number indicating that there are a plurality of different two-dimensional relief training samples. Further, image features representing the two-dimensional landform training sample are correspondingly arrangedWherein->The corresponding pixel characteristics comprise the number of pixels (or the percentage of pixels, namely the percentage value of the number of pixels of the sample to the total number of pixels of the standardized pattern), the color value of the pixels, the actual space size corresponding to a single pixel and the like; />The corresponding space azimuth features comprise azimuth angles of all sides; />The corresponding picture is in a picture format, such as jpg, tiff and other different picture storage formats.
Through step S02, as many two-dimensional landform training samples as possible are obtained. In step S03 the image features of each two-dimensional landform training sample are extracted, and the initial neural network recognition model is then trained with the two-dimensional landform training samples so as to continuously adjust its parameters until it accurately recognizes two-dimensional landform targets in the standardized patterns, finally yielding the corresponding two-dimensional landform recognition neural network model.
Preferably, as shown in fig. 13, the initial neural network recognition model is an image recognition network built mainly on a convolutional neural network. It comprises a first-stage feature extraction network N1 that extracts image features from the input image to obtain a feature map N2. The output of the first-stage feature extraction network N1 also enters a second-stage region suggestion network N3 for region selection; the selected regions are then applied to the feature map N2, i.e. the feature map N2 is divided into regions. A third-stage pooling layer N4 then performs pooling, and the pooled results are input in parallel to three recognition networks: a classification recognition network N5, a frame recognition network N6, and a pixel recognition network N7, which respectively identify the type of each two-dimensional landform in the image, recognize and label its bounding frame, and recognize and segment its pixels. The recognition image carrying these labels is then output.
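The dataflow of this four-stage pipeline (N1 backbone producing feature map N2, N3 region suggestion, N4 pooling, then the three parallel heads N5/N6/N7) can be sketched structurally as follows. The function signatures and the result-dictionary keys are illustrative assumptions; each stage is passed in as a callable rather than implemented here.

```python
def recognize(image, backbone, rpn, pool, heads):
    """Structural sketch of the recognition pipeline: the backbone (N1)
    produces the feature map (N2); the region suggestion stage (N3)
    selects candidate regions on it; the pooling stage (N4) crops each
    region; and three parallel heads label every region with a class
    (N5), a bounding frame (N6), and a per-pixel mask (N7)."""
    feature_map = backbone(image)               # N1 -> N2
    regions = rpn(feature_map)                  # N3: candidate regions
    results = []
    for region in regions:
        crop = pool(feature_map, region)        # N4: pooled region feature
        results.append({"class": heads["cls"](crop),   # N5
                        "box":   heads["box"](crop),   # N6
                        "mask":  heads["mask"](crop)}) # N7
    return results
```

This mirrors the text's description that the three heads run in parallel on the same pooled region feature.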
Preferably, for the first-stage feature extraction network N1, a ResNet convolutional neural network may be used to extract the two-dimensional landform features in the image and obtain a segmentation result of the two-dimensional landforms; for example, the ResNet18, ResNet34, ResNet50, ResNet101, or ResNet152 network may be used.
Preferably, the second-stage region suggestion network N3 corresponds to an RPN (Region Proposal Network) and is configured to perform a two-dimensional landform target frame regression operation on each candidate region, generate a two-dimensional landform target regression frame, and obtain a class likelihood. Specifically, the second-stage region suggestion network N3 scans the feature map N2 and attempts to determine regions containing a two-dimensional landform target. For example, the RPN convolves feature maps of different scales and generates 3 anchor points (anchors) at each location; for the two-dimensional landforms, 3 x 4 convolution kernels are generated (3 landform color classes plus the background). Two fully connected layers follow the convolution kernels to discriminate foreground (target) from background for each pixel and to perform regression correction of the two-dimensional landform target frame.
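Anchor generation of the kind described — a small fixed set of boxes centred at each feature-map location — can be sketched as below. The scale/ratio parameterization is the common RPN convention and is an assumption here; the patent only states that 3 anchors are generated per location.

```python
def make_anchors(cx, cy, scales, ratios):
    """Generate anchor boxes (x1, y1, x2, y2) centred at one feature-map
    location, one box per (scale, ratio) pair -- e.g. 3 anchors when a
    single scale is combined with 3 aspect ratios.  `ratio` is the
    width/height aspect ratio; each anchor keeps area scale**2."""
    anchors = []
    for s in scales:
        for r in ratios:
            w = s * (r ** 0.5)
            h = s / (r ** 0.5)
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```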
Preferably, the third-stage pooling layer N4 may correspond to a target region pooling network (ROI Pooling, Region of Interest Pooling) or an ROI Align network. The pooling processing corresponding to the ROI Pooling network comprises quantizing and rounding the boundary coordinates of the target region; the region is then divided into m x m units, and the boundaries of these units are again quantized and rounded. These quantization operations may cause the boundary values to deviate from the original ones, which affects the accuracy of segmentation, although it may not affect classification.
The pooling processing corresponding to ROI Align cancels the quantization rounding of the boundary coordinates and uses bilinear interpolation to compute exact feature values within the m x m units, thereby turning the pooling of the target region into a continuous operation. The specific operation is as follows: (1) traverse each target region without rounding its boundary; (2) divide each target region into m x m units without rounding the unit boundaries; (3) compute the values at four sampling positions in each unit by bilinear interpolation; (4) apply maximum pooling to each unit to obtain a feature map of size m x m.
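Steps (1)-(4) can be written out directly for a single-channel feature map. This is a minimal numpy sketch under the assumption of 4 regularly spaced sampling points per unit (the patent says "four coordinate positions" without fixing their placement); real implementations such as torchvision's RoIAlign differ in details.

```python
import numpy as np

def bilinear(feat, y, x):
    """Sample 2-D feature map `feat` at continuous coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, feat.shape[0] - 1), min(x0 + 1, feat.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0, x0] * (1 - dy) * (1 - dx) + feat[y0, x1] * (1 - dy) * dx
            + feat[y1, x0] * dy * (1 - dx) + feat[y1, x1] * dy * dx)

def roi_align(feat, box, m):
    """ROI Align: divide the unrounded box (y1, x1, y2, x2) into m x m
    units, bilinearly sample 4 points per unit, and max-pool them."""
    y1, x1, y2, x2 = box
    ch, cw = (y2 - y1) / m, (x2 - x1) / m       # continuous unit size
    out = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            cy, cx = y1 + i * ch, x1 + j * cw   # unit's top-left corner
            # 4 regularly spaced sampling points inside the unit
            pts = [(cy + ch * fy, cx + cw * fx)
                   for fy in (0.25, 0.75) for fx in (0.25, 0.75)]
            out[i, j] = max(bilinear(feat, py, px) for py, px in pts)
    return out
```

Because no coordinate is ever rounded, the output varies smoothly with the box position, which is what preserves segmentation accuracy.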
Preferably, the classification recognition network N5 may correspond to a fully convolutional network (Fully Convolutional Network, FCN) used to predict the two-dimensional landform category.
Preferably, the frame recognition network N6 may correspond to a fully convolutional network (Fully Convolutional Network, FCN) used to generate the regression frame corresponding to each two-dimensional landform.
Preferably, the pixel recognition network N7 may be a fully convolutional network (Fully Convolutional Network, FCN), for example comprising 4 consecutive convolution layers, each with 256 channels and a 3x3 kernel, followed by a deconvolution layer with 2x up-sampling, used to predict a binary pixel mask for each two-dimensional relief object.
Thus, the final recognition image outputs three recognition results: the class of the object, the regression frame coordinates of the object, and the pixel value of the object.
In step S03, training the initial neural network recognition model with the two-dimensional landform training samples mainly comprises setting the network parameters and adjusting the network configuration, such as the initial learning rate and the number of epochs (one epoch meaning that the complete data set has passed forward through the neural network model and back once); loading pre-training weights to start training the network; and storing the weights obtained by training. The required two-dimensional landform recognition neural network model is finally obtained.
During training, the network configuration and parameters are continuously optimized and adjusted, first so that the two-dimensional landform training samples can be accurately identified; verification image samples are then input for detection, and the accuracy and reliability with which the two-dimensional landform recognition neural network model detects two-dimensional landforms are evaluated.
Preferably, in conjunction with fig. 1, the two-dimensional nodding image along the road also needs to be divided into a plurality of standardized patterns, which are then input into the two-dimensional landform recognition neural network model for two-dimensional landform target recognition.
Preferably, referring to the embodiment of FIG. 11, the two-dimensional nodding image along the road, denoted Q1, is divided into a set of standardized patterns, namely Q1 = {Q1,1, Q1,2, …, Qm2,n2}, wherein Q1,1, Q1,2, …, Qm2,n2 respectively represent the standardized patterns, all having the same number of pixels, obtained by dividing the two-dimensional nodding image along the road; they also have the same number of pixels as the standardized patterns corresponding to the aforementioned samples in the training process. n2 represents the transverse segmentation ratio and m2 the longitudinal segmentation ratio of the two-dimensional nodding image along the road.
Further, the obtained two-dimensional landform feature data correspond respectively to two-dimensional landform data sets; that is, Q1,1, Q1,2, …, Qm2,n2 correspond respectively to the two-dimensional landform data sets q1,1, q1,2, …, qm2,n2. For any landform data subset qe,r in the set, with e ∈ (1, m2) and r ∈ (1, n2), the two-dimensional landform features extracted from it are represented with the pixel coordinate system as reference.
Preferably, continuing with FIG. 12 as an example, suppose the figure shows a standardized pattern of the two-dimensional nodding image along the road, from which a landform data subset qe,r has been extracted and identified. The landform data subset qe,r comprises a plurality of identified two-dimensional relief objects lj, where j represents a sequence number; FIG. 14, for example, shows identified two-dimensional relief objects l1 to l9. Any two-dimensional relief object lj is represented by data features established with the pixel coordinate system as reference, D(lj) = {dj1, dj2, dj3}. Here dj1 corresponds to the set of pixel coordinates of the two-dimensional relief object, which together define the outline of the object and the pixels inside the outline; for example, d11 can be expressed as d11 = {(x1', y1'), (x2', y2'), …, (xg1', yg1')}, wherein (x1', y1') is the coordinate of one pixel of the two-dimensional relief object, the other coordinates have the same meaning, and the coordinates correspond to g1 pixels. dj2 corresponds to the set of pixel values of these pixels, each pixel value representing the color of the pixel; for example, d12 can be expressed as d12 = {v1, v2, …, vg1}, wherein v1 is the pixel value of one pixel of the two-dimensional relief object and may comprise the three values of the three primary colors red, green, and blue, and is therefore represented here as a vector; the other pixel values have the same meaning, corresponding to the pixel values of the g1 pixels. dj3 corresponds to the spatial orientation feature and can be defined as the clockwise angle between the direction of the line connecting two pixel coordinates and due north; for example, d13 = α indicates that the clockwise angle between the direction of the line from (x1', y1') to (x2', y2') and due north is α.
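The spatial orientation feature described above — the clockwise angle between the direction of a line connecting two pixel coordinates and due north — can be computed as below, together with a bundle of the three data features of one relief object. The image-axis convention (pixel x grows east, pixel y grows south) and all function names are illustrative assumptions.

```python
import math

def azimuth(p1, p2):
    """Clockwise angle in degrees between the line from pixel p1 to
    pixel p2 and due north, assuming image coordinates where +x points
    east (right) and +y points south (down)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # due north corresponds to -y in pixel coordinates
    return math.degrees(math.atan2(dx, -dy)) % 360.0

def relief_object(coords, values, orient_pair):
    """Bundle one identified relief object's data features:
    pixel coordinates, pixel values, and the spatial orientation
    derived from a chosen pair of its pixel coordinates."""
    return {"coords": list(coords), "values": list(values),
            "azimuth": azimuth(*orient_pair)}
```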
Preferably, the obtained two-dimensional landform feature data are recombined using the divided standardized patterns. Since the identified two-dimensional relief objects in each landform data subset are data features established with the pixel coordinate system as reference, they can be laid out geospatially in combination with their specific orientations, and can therefore be input into the simulation environment of the three-dimensional space for spatial simulation processing.
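Laying the pixel-referenced data out geospatially amounts to mapping each pixel coordinate to a planar geographic coordinate given the image origin's geographic position and the actual ground size of a single pixel. This helper is a sketch under those assumptions (north-up image, square pixels); the names are not from the patent.

```python
def pixel_to_geo(px, py, origin_xy, pixel_size):
    """Map a pixel coordinate (px, py) to a planar geographic coordinate
    (x1, y1).  `origin_xy` is the geographic position of the image's
    origin pixel and `pixel_size` the ground distance covered by one
    pixel.  Pixel y grows southward, so it is subtracted from the
    origin's northing."""
    ox, oy = origin_xy
    return (ox + px * pixel_size, oy - py * pixel_size)
```

Applying this to every coordinate in a relief object's pixel set places the object's outline in the three-dimensional geographic coordinate system's X1Y1 plane, ready for elevation matching.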
Therefore, the invention discloses a march road line three-dimensional scene simulation method, which comprises: performing specification consistency processing on a two-dimensional landform nodding image sample to obtain standardized patterns, and labeling and cropping the two-dimensional landforms in the standardized patterns to obtain a plurality of two-dimensional landform training samples; extracting the image features of the two-dimensional landform training samples and training to obtain a two-dimensional landform recognition neural network model; using the model to perform target recognition on an application image containing nodding two-dimensional landforms and extract two-dimensional landform feature data; and inputting the two-dimensional landform feature data into a three-dimensional landform simulation environment, performing three-dimensional simulation layout of landform height and appearance, and rapidly generating the corresponding three-dimensional landform scene. The method accommodates standardized processing of two-dimensional landform nodding images of various specifications, accurately identifies the two-dimensional relief objects in them, and rapidly generates the corresponding three-dimensional landform scenes, and thus has a wide application range, low implementation difficulty, and high efficiency.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent structural changes made by the present invention and the accompanying drawings, or direct or indirect application in other related technical fields, are included in the scope of the present invention.
Claims (9)
1. A march road line three-dimensional scene simulation method is characterized by comprising the following steps:
acquiring road data corresponding to a march road between a departure place and a destination, elevation data along the road and a two-dimensional nodding image along the road;
performing target recognition on the two-dimensional nodding image along the road to obtain the landform feature data along the road, inputting the road data and the elevation data along the road into a three-dimensional geographic coordinate system, and performing spatial coordinate matching with the landform feature data along the road;
generating a simulated landform scene corresponding to the landform feature data along the road, generating a simulated road corresponding to the road data, generating three-dimensional simulated terrain corresponding to the elevation data, and rendering the three-dimensional simulated terrain with the simulated landform scene and the simulated road according to the spatial information features of the landform feature data along the road and of the road data in the three-dimensional geographic coordinate system, so as to generate the three-dimensional scene along the march road.
2. The method for simulating a three-dimensional scene along a march road according to claim 1, wherein the method for obtaining the landform feature data along the road comprises:
performing target recognition on the two-dimensional nodding image along the road by using a trained landform recognition neural network model, and extracting first to Nth landform feature data respectively corresponding to a plurality of different landform types, wherein N represents the number of landform type categories and N ≥ 2;
the spatial coordinate matching of the landform feature data along the road comprises: matching the first to Nth landform feature data with the spatial coordinates of the road data and the elevation data along the line.
3. The method for three-dimensional scene simulation along a march road according to claim 2, wherein the two-dimensional nodding image along the road is represented by a pixel coordinate system, and the first to Nth landform feature data are represented on the basis of the pixel coordinate system, wherein the first landform feature data are represented as:
A1 = {a1(0,0), a1(0,1), …, a1(Xmax-1, Ymax-1)};
the Nth landform feature data are represented as:
AN = {aN(0,0), aN(0,1), …, aN(Xmax-1, Ymax-1)};
wherein A1 represents the set of pixel values of the pixel points comprised by the first landform feature data in the pixel coordinate system, a1(0,0) represents the pixel value at the origin pixel coordinate (0, 0) of the first landform feature data in the pixel coordinate system, a1(0,1) represents the pixel value at pixel coordinate (0, 1) of the first landform feature data in the pixel coordinate system, and so on, and a1(Xmax-1, Ymax-1) represents the pixel value at the maximum pixel coordinate (Xmax-1, Ymax-1) of the first landform feature data in the pixel coordinate system; AN represents the set of pixel values of the pixel points comprised by the Nth landform feature data in the pixel coordinate system, aN(0,0) represents the pixel value at the origin pixel coordinate (0, 0) of the Nth landform feature data in the pixel coordinate system, aN(0,1) represents the pixel value at pixel coordinate (0, 1) of the Nth landform feature data in the pixel coordinate system, and so on, and aN(Xmax-1, Ymax-1) represents the pixel value at the maximum pixel coordinate (Xmax-1, Ymax-1) of the Nth landform feature data in the pixel coordinate system.
4. A method of simulating a three-dimensional scene along a march road as recited in claim 3, wherein the first to Nth landform feature data are mapped into the three-dimensional geographic coordinate system X1Y1Z1 of the simulated environment by taking the abscissa and ordinate of each pixel coordinate of the first to Nth landform feature data and mapping them correspondingly to a coordinate point (x1, y1) in the X1Y1 plane.
5. A method of three-dimensional scene simulation along a line of a march road according to claim 3, wherein the method of matching the first to Nth landform feature data with the spatial coordinates of the elevation data along the line is: making the plane coordinate point (x2, y2) corresponding to each three-dimensional coordinate point (x2, y2, z2) of the elevation data along the line coincide with the plane coordinate point (x1, y1), thereby obtaining the elevation value of each coordinate point in the three-dimensional geographic coordinate system.
6. The method according to claim 5, wherein the spatial information features of the first to Nth landform feature data in the three-dimensional geographic coordinate system include the coordinate point of the geometric center of the spatial area occupied by each pixel coordinate in the three-dimensional geographic coordinate system, the spatial area range corresponding to each pixel, and the regional range occupied in the three-dimensional geographic coordinate system by pixels having the same landform feature type.
7. The method for simulating a three-dimensional scene along a march road according to claim 2, wherein the method for obtaining the geomorphic recognition neural network model comprises:
inputting a two-dimensional landform nodding image sample, and performing specification consistency processing on the two-dimensional landform nodding image sample to obtain one or more standardized patterns;
labeling and cutting two-dimensional landforms in the standardized pattern to obtain a plurality of two-dimensional landform training samples;
and extracting the image characteristics of the two-dimensional landform training sample, inputting the image characteristics into an initial neural network recognition model, and performing recognition training on the two-dimensional landform target in the standardized pattern to finally obtain a landform recognition neural network model.
8. The method for simulating a three-dimensional scene along a march road according to claim 7, wherein the initial neural network recognition model comprises a first-stage feature extraction network for extracting image features from an input image to obtain a feature map, wherein the output of the first-stage feature extraction network also enters a second-stage region suggestion network for region selection, and the selected regions are then applied to the feature map to divide it into regions; pooling processing is then carried out by a third-stage pooling layer, and the processing results are input in parallel to three recognition networks, namely a classification recognition network, a frame recognition network, and a pixel recognition network, which respectively identify the type of each two-dimensional landform in the image, recognize and label the frame of each two-dimensional landform, and recognize and segment the pixels within each two-dimensional landform; the recognition image carrying the recognition labels is then output.
9. The method for three-dimensional scene simulation along a march road as recited in claim 8, wherein the two-dimensional nodding image along the road, denoted Q1, is divided into a set of standardized patterns, namely Q1 = {Q1,1, Q1,2, …, Qm2,n2}, wherein Q1,1, Q1,2, …, Qm2,n2 respectively represent the standardized patterns, all having the same number of pixels, obtained by dividing the two-dimensional nodding image along the road, n2 represents the transverse segmentation ratio of the two-dimensional nodding image along the road, and m2 represents its longitudinal segmentation ratio;
the landform feature data are obtained by taking each divided standardized pattern of the two-dimensional nodding image along the road as an object, corresponding respectively to two-dimensional landform data sets, i.e. Q1,1, Q1,2, …, Qm2,n2 correspond respectively to the two-dimensional landform data sets q1,1, q1,2, …, qm2,n2; for any landform data subset qe,r, with e ∈ (1, m2) and r ∈ (1, n2), the extracted two-dimensional landform features are represented with the pixel coordinate system as reference;
the landform data subset qe,r comprises a plurality of identified two-dimensional relief objects lj, wherein j represents a sequence number; any two-dimensional relief object lj is represented by data features established with the pixel coordinate system as reference, D(lj) = {dj1, dj2, dj3}, wherein dj1 corresponds to the set of pixel coordinates of the two-dimensional relief object, dj2 corresponds to the set of pixel values of the pixels of the two-dimensional relief object, and dj3 corresponds to the spatial orientation feature.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310244752.1A CN116843843A (en) | 2023-03-14 | 2023-03-14 | March road line three-dimensional scene simulation method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116843843A true CN116843843A (en) | 2023-10-03 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040027344A1 (en) * | 2002-03-27 | 2004-02-12 | Sony Corporation | Three-dimensional terrain-information generating system and method, and computer program therefor |
CN104318617A (en) * | 2014-10-17 | 2015-01-28 | 福建师范大学 | Three-dimensional geographical scene simulation method for virtual emergency exercises |
CN106355643A (en) * | 2016-08-31 | 2017-01-25 | 武汉理工大学 | Method for generating three-dimensional real scene road model of highway |
JP2020013351A (en) * | 2018-07-18 | 2020-01-23 | 株式会社パスコ | Three-dimensional map generation device and program |
CN115203460A (en) * | 2022-07-05 | 2022-10-18 | 中山大学·深圳 | Deep learning-based pixel-level cross-view-angle image positioning method and system |
Non-Patent Citations (3)
Title |
---|
冷冕冕; 孙少斌: "Research on *** three-dimensional visualization and 2D/3D interactive mapping", Microcomputer Information, no. 07, 5 March 2010 (2010-03-05), pages 173-175 *
孔庆杰; 史文欢; 刘允才: "Automatic generation method of vector road network maps based on GPS trajectories", Journal of University of Science and Technology of China, no. 08, 15 August 2012 (2012-08-15), pages 623-647 *
曾安里; 张建康; 王国宏; 高雨青: "Research on a military three-dimensional visual simulation engine", Command Information *** and Technology, no. 05, 28 October 2010 (2010-10-28), pages 17-23 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||