CN114241024A - Artificial neural network building texture mapping method and system based on sliding edge detection - Google Patents

Artificial neural network building texture mapping method and system based on sliding edge detection

Info

Publication number
CN114241024A
CN114241024A (application CN202111324125.6A; granted publication CN114241024B)
Authority
CN
China
Prior art keywords
building
remote sensing
image
roof
contour
Prior art date
Legal status
Granted
Application number
CN202111324125.6A
Other languages
Chinese (zh)
Other versions
CN114241024B (en)
Inventor
刘俊伟
杨文雪
Current Assignee
Terra It Technology Beijing Co ltd
Original Assignee
Terra It Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Terra It Technology Beijing Co ltd
Priority to CN202111324125.6A
Publication of CN114241024A
Application granted
Publication of CN114241024B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/49 Analysis of texture based on structural texture description, e.g. using primitives or placement rules
    • G06F18/2135 Feature extraction by transforming the feature space, based on approximation criteria, e.g. principal component analysis
    • G06F18/23 Clustering techniques
    • G06N3/084 Learning methods: backpropagation, e.g. using gradient descent
    • G06T7/13 Edge detection
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10048 Infrared image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention relates to an artificial neural network building contour extraction and texture mapping method based on sliding edge detection, comprising the following steps. S1, determine unified geographic coordinate systems between a plurality of remote sensing image maps and the corresponding aerial LIDAR point cloud maps. S2, use an initial sliding rectangular frame F_0 to slide-scan each remote sensing image map in a training set, determine the contours of interest FOI, and establish a building contour extraction model M and a final sliding frame F_f. S3, use the final sliding frame F_f to obtain a predicted building contour P_f, register the remote sensing image maps of the building contours to be extracted with the corresponding aerial LIDAR point cloud maps through the coordinate system, and map the predicted building contour P_f into the corresponding aerial LIDAR point cloud map. S4, input the registered building contours and the building roof images pic2 inside them into a pre-established mapping model S to obtain the corresponding building roof texture classifications, find the texture patterns in a texture library, and fill them into the building contours to complete the mapping. The method simplifies the front-end algorithm of the contour extraction network while extracting building contours accurately; coordinate-system registration gives fast registration between the remote sensing image maps and the corresponding aerial LIDAR point cloud maps and remote sensing infrared maps; and modelling RGB and remote sensing infrared data separately to identify the roof material type yields accurate roof texture mapping.

Description

Artificial neural network building texture mapping method and system based on sliding edge detection
Technical Field
The invention relates to an artificial-intelligence building contour extraction scheme, and in particular to an artificial neural network building texture mapping method and system based on sliding edge detection, belonging to the field of artificial-intelligence image processing.
Background
Building contour extraction is an important step in identifying buildings in urban digital maps. Most existing extraction schemes use multi-step artificial neural network training to obtain a feature map and bounding boxes, then refine the boxes against manual annotations through a box-regression algorithm to reach accurate identification. The network front end is mainly a CNN backbone such as VGGNet, MobileNet or ResNet for feature extraction, and box features can only be extracted through an FPN or RPN algorithm, yielding a complex artificial network pipeline of feature identification and box extraction. Moreover, box identification in the training stage depends on manually annotated boxes, so an accurate network cannot be obtained without manual annotation work.
Finally, box regression statistics through the neural network yield a more accurate prediction. A multi-layer RNN with convolutional long short-term memory (ConvLSTM) has also been used to predict building vertices and thereby obtain a more accurate building contour.
However, these algorithms not only train a complex artificial network at the front end; the boxes they produce are in fact only a rough step. Because the network back end relies on uniform rectangular anchors, the bounding boxes of irregular buildings wrongly include redundant ground area, especially for semi-enclosed irregular structures such as L-shaped conjoined buildings or 凹-shaped (concave) and 回-shaped (ring) buildings. Vertex-prediction schemes can locate building vertices accurately, but because the contour segments between vertices are uniformly treated as straight lines they have at least two defects. First, the contour between vertices always carries some error, since a real building edge is not a near-geometric straight line as in a drawing. Second, a straight-line model is plainly wrong for contours with curved edges: in particular, a domed building cannot have its contour predicted accurately with a rectangular box, and its vertices cannot be found (since a circle can be regarded as the limit of a polygon with infinitely many vertices, the computation cannot finish in finite time). Although most modern buildings are rectangular, special roofs remain, so the prior art cannot offer one algorithm suitable for all roofs.
Aerial LIDAR point cloud technology is an efficient, high-precision data form of aerial three-dimensional imagery, but because the imagery is blurred it cannot render building details truly and intuitively. Even the building contour extraction techniques in this field must first apply gray-scale and DSM image processing to the point cloud data; contour extraction has therefore never been performed on the LIDAR point cloud itself, and the obstacle is again the blurriness of point cloud imagery. Building contour extraction from sharp remote sensing images is therefore the alternative (via the artificial network algorithms described above). However, clear visualization of the point cloud data is currently achieved only by extracting building contours separately from the LIDAR point cloud and the remote sensing image, then registering them and assigning the contours and optical information. After this series of processing the accuracy goal is met, but the overall working efficiency is reduced relative to the efficiency purpose of aerial LIDAR point cloud technology, defeating that purpose.
Disclosure of Invention
The invention aims to solve these technical problems and provides an optimized, simple building contour extraction algorithm that requires no manually annotated boxes for training (box-regression prediction is still needed). The idea is that, exploiting the sharp imagery of remote sensing image maps and an urban coordinate system established in advance, the building contours and optical information obtained by the optimized algorithm from the remote sensing image map can be quickly registered and mapped into the aerial LIDAR point cloud map without any image processing of the point cloud map itself, achieving accurate and efficient results. The optimization comes mainly from replacing the prior-art front-end network with a novel box extraction scheme that is efficient, simple, and free of manual box-annotation training.
Based on these considerations, the invention provides an artificial neural network building contour extraction and texture mapping method based on sliding edge detection, characterized by comprising the following steps:
S1, obtain a plurality of remote sensing image maps of at least one city and the corresponding aerial LIDAR point cloud maps, and determine unified geographic coordinate systems between the remote sensing image maps and the corresponding aerial LIDAR point cloud maps;
S2, determine an initial sliding rectangular frame F_0 with length L and width W, use it to slide-scan the area of each remote sensing image map in a training set, determine the contours of interest FOI during the sliding scan, and establish a building contour extraction model M with the training set and a validation set to obtain a final sliding frame F_f;
S3, obtain the remote sensing image maps of all building contours to be extracted in the training, validation and test sets, obtain the predicted building contour P_f with the final sliding frame F_f and the building contour extraction model M, register each remote sensing image map of extracted building contours with the corresponding aerial LIDAR point cloud map through the coordinate system, and map the predicted building contour P_f into the corresponding aerial LIDAR point cloud map;
S4, obtain the building contours in the registered remote sensing image maps of the training set and the building roof images pic1 inside them, establish a building texture mapping model S with building roof texture maps, input the building contours in at least 1 registered remote sensing image map of the test set and at least 1 building roof image pic3 inside them into S to obtain the corresponding building roof texture classifications, find the corresponding texture patterns in the texture library, and fill the pattern textures into the building contours and their interiors in the registered aerial LIDAR point cloud maps of the test set to complete the mapping.
About S1
S1 specifically includes:
S1-1, acquire a plurality of remote sensing image maps of 1-660 cities and the corresponding aerial LIDAR point cloud maps, the point cloud maps being acquired at the same times as the remote sensing image maps, and acquire at least 1 remote sensing image map containing a complete predetermined standard building together with its corresponding aerial LIDAR point cloud map; preferably, the number of remote sensing image maps is 540-;
preferably, the remote sensing image maps of the 1-660 cities, the corresponding aerial LIDAR point cloud maps and the corresponding infrared remote sensing images are acquired at the same times.
S1-2, establish a unified urban geographic coordinate system E in the 1 remote sensing image map containing a complete predetermined standard building and its corresponding aerial LIDAR point cloud map; form map groups of two maps each from all remaining remote sensing image maps and their corresponding aerial LIDAR point cloud maps; and determine, by the geographic true-north direction, a unified coordinate system E^(i), i = 1, 2, ..., N, with the same origin in the 1 remote sensing image map and the corresponding 1 aerial LIDAR point cloud map of each group (a Z axis may be added to form a three-dimensional coordinate system), where N is the number of remote sensing image maps (equivalently, of corresponding aerial LIDAR point cloud maps). The remote sensing image maps and corresponding aerial LIDAR point cloud maps with established coordinate systems are divided into training, validation and test sets in the ratio 100-50 : 10-5 : 3-1, preferably 50:9:1.
Preferably, the predetermined standard building has a rectangular roof frame; the vertical projection of one vertex on the ground is selected as the coordinate origin O, and a ground-plane rectangular coordinate system E is formed with the vertical projection of one edge on the ground as the X axis and the vertical projection of the other edge as the Y axis (a Z axis may also be added to form a three-dimensional coordinate system E). More preferably, the X axis points due east and the Y axis due north.
Determining the unified coordinate system E^(i), i = 1, 2, ..., N, with the same origin in each remote sensing image map and its corresponding aerial LIDAR point cloud map by the geographic true-north direction specifically includes: determine the angle α between the X axis of coordinate system E and true north; place the origin of E at a preset point occupying the same position under E in the 1 remote sensing image map of each group and the corresponding 1 aerial LIDAR point cloud map; and adjust the X-axis orientation so that its angle to true north is α, thereby obtaining coordinate systems E^(i), i = 1, 2, ..., N, with a uniform X-axis orientation in the 1 remote sensing image map and corresponding 1 aerial LIDAR point cloud map of each group. Preferably, when the X axis of E points due east and the Y axis due north, adjust the X axis to point due east or the Y axis to point due north.
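As a concrete illustration, placing E^(i) reduces to a translation of the preset origin point followed by a fixed rotation tied to the angle α. A minimal sketch under that reading (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def to_unified_frame(points_xy, origin_xy, alpha_deg):
    # Express ground points in E(i): translate the preset point to the
    # origin, then rotate the axes so the X axis makes the angle alpha
    # with true north, as described in S1-2. Assumes points_xy are
    # north-aligned ground coordinates (x east, y north) read off the map.
    a = np.deg2rad(alpha_deg)
    axes_rotation = np.array([[ np.cos(a), np.sin(a)],
                              [-np.sin(a), np.cos(a)]])
    shifted = np.asarray(points_xy, float) - np.asarray(origin_xy, float)
    return shifted @ axes_rotation.T
```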
Preferably, the corresponding infrared remote sensing map is also added to each map group; the unified urban geographic coordinate system E or the unified coordinate system E^(i), i = 1, 2, ..., N, with the same origin is established in it according to whether it contains a complete predetermined standard building, and it is added to the training, validation and test sets accordingly.
It can be understood that when the 1 remote sensing image map or the corresponding 1 aerial LIDAR point cloud map of a map group belongs to the training, validation or test set, the corresponding 1 infrared remote sensing image map is added to the same set.
About S2
S2 specifically includes:
S2-1, determine an initial sliding rectangular frame F_0 with length L and width W; preferably, L and W take values in [nr, 2nr], where r is the resolution of the remote sensing image map and n ∈ [10, 20]; in one embodiment, L = W.
S2-2, obtain the RGB triples of a predetermined number of pixels inside each roof and near its contour (e.g., within one third of the distance from a contour point to the geometric centre or centroid of the contour) in each remote sensing image map of the training set; determine a preset time interval T; and use the initial sliding rectangular frame F_0 to slide-scan all areas of each remote sensing image map in the training set from time zero. During the sliding scan, determine the contours inside F_0 every interval T to form the contours of interest FOI, and obtain the feature map PFOI of the contours of interest FOI containing the building contours when the scan completes.
Preferably, the predetermined number is 4-24 pixels per building, more preferably 3 per contour edge of each building, and T is 0.1-1 s.
It will be appreciated that all FOIs within the region are determined after a finite duration of sliding scan.

Preferably, the sliding scan is a machine scan: align a right-angle side of the initial sliding rectangular frame F_0 (or the intermediate sliding rectangular frame F_m below) with the two perpendicular edges at any corner of the remote sensing image map and start a row or column sliding scan; then translate by one frame width of F_0 (or F_m) in the width or length direction of the image, align the width- or length-direction side of F_0 (or F_m) with the width- or length-direction edge of the image, and begin a new row scan, until the whole area is scanned. The row sliding scan proceeds at constant speed: in each preset interval T the frame slides exactly one length or width of F_0 (or F_m) in the row-scan direction.
More preferably, after determining the contour FOI inside the frame at each interval T, set the area of the initial sliding rectangular frame F_0 (or the intermediate sliding rectangular frame F_m below) in the scanned remote sensing image map as a scanned area, fill it with a gray or colour value as a prompt, and delete the fill after all areas have been scanned.
It can be understood that such a sliding scan can also be performed manually with the sliding frame on the computed and displayed remote sensing image map: the scanned partial areas with determined contours remain visible, guiding the manual scan over the unfilled areas without omission. Any sliding direction and starting point may therefore be used, since any area without fill is still original remote sensing image whose contours have not yet been determined at time T. This is especially convenient when the final sliding frame is later used on the test set to determine building contours: a remote sensing image map whose building contours are to be determined can be scanned in any sliding manner.
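For the machine scan, the frame positions can be generated in raster order, one frame length per interval T along a row and one frame width between rows. A minimal sketch of that traversal (pixel-based stepping and names are assumptions):

```python
def raster_windows(img_w, img_h, frame_l, frame_w):
    # Yield top-left corners of the sliding frame: each row is scanned at
    # constant speed, advancing one frame length per interval T, then the
    # frame translates by one frame width for the next row (S2-2).
    y = 0
    while y + frame_w <= img_h:
        x = 0
        while x + frame_l <= img_w:
            yield (x, y)
            x += frame_l
        y += frame_w
```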
Wherein determining the contours inside the initial sliding rectangular frame F_0 every interval T to form the contours of interest FOI specifically comprises:

S2-2-1, obtain the classified cluster map of the RGB triples of the different roof materials through a clustering algorithm. From time zero, every interval T, obtain the RGB triples inside the initial sliding rectangular frame F_0; set the pixels whose RGB triple is not in any cluster of the city-roof RGB triples to a first gray or colour value and the remainder to a second, different gray or colour value; or set to the first gray or colour value the pixels whose RGB triple is in a cluster of the city-roof RGB triples, together with those outside every cluster whose smallest distance to a cluster point, d = sqrt((R1 − R2)² + (G1 − G2)² + (B1 − B2)²), lies within a threshold range, and set the remainder to the second gray or colour value, forming the binarized map; preferably, the threshold range is 0-25. Alternatively,

determine the roof RGB distribution range from the obtained roof RGB triples; from time zero, every interval T, obtain the RGB triples inside the initial sliding rectangular frame F_0; set a pixel to the first gray or colour value when at least one of its three RGB values is outside the roof RGB distribution range and its smallest difference from the values in the range exceeds a preset threshold, and set the remainder to the second, different gray or colour value. Preferably, the threshold range is 25-254;

S2-2-2, identify the contours in the binarized map with edge detection; preferably the edge detection is Canny or Sobel edge detection, or the contours are determined by point-by-point scanning, which specifically comprises:

scanning each pixel of the binarized map row by row and setting a pixel as a contour point whenever the three RGB values jump, thereby traversing all pixels inside the initial sliding rectangular frame F_0 and completing the determination of the building contours at time T;

preferably, the determined contour points are set to a third gray or colour value.

S2-2-3, after obtaining the binarized map, or after the whole remote sensing image map has been scanned and binarized, remove the binarization to recover the remote sensing image while retaining the identified contours.
It can be understood that a building roof interior is generally uniform and of one material, so its RGB triples can be treated as approximately equal, while the areas outside the contour are generally roads, green belts, and the people and objects on them, whose material composition differs greatly from the roof material; their visible spectra, and hence their RGB triples, differ accordingly. Binarization plus edge detection can therefore identify, efficiently and in one pass, the contours present in the sliding frame at each interval T and the roof portions inside it, without using an artificial network for feature identification: the roofs and contours in the traversed area are identified by sliding alone (the coarse contour extraction stage). Because the roof RGB triples are sampled near the contour edges, they reflect the RGB values near the contour accurately and provide a more precise segmentation threshold for binarization.
As for the clustering algorithm: roof material compositions are clearly categorized, e.g. concrete, brick and tile, coloured glaze, ceramic, polymer materials, asphalt, and so on, each typically governed by its own industry standard. For the same city these materials therefore show clearly high-density (low inter-point dispersion) clustering in RGB, and each cluster can be approximated as a sphere in RGB space. Membership testing can thus be simplified to a distance test against the cluster centre (approximate sphere centre): a point is considered outside a cluster when its distance to the centre exceeds the maximum distance of the cluster's points from that centre.
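Under this spherical-cluster simplification, the in-cluster test of S2-2-1 reduces to comparing each pixel's RGB distance to the nearest cluster centre against that cluster's radius plus the 0-25 slack. A minimal sketch (names and gray values are illustrative):

```python
import numpy as np

def binarize_window(window_rgb, centers, radii, slack=25,
                    roof_gray=0, other_gray=255):
    # window_rgb: (H, W, 3) crop inside the sliding frame; centers: (C, 3)
    # cluster centres (approximate sphere centres); radii: (C,) maximum
    # distance of each cluster's points from its centre (S2-2-1).
    px = window_rgb.reshape(-1, 3).astype(float)
    d = np.linalg.norm(px[:, None, :] - centers[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    inside = d[np.arange(len(px)), nearest] <= radii[nearest] + slack
    out = np.where(inside, roof_gray, other_gray).astype(np.uint8)
    return out.reshape(window_rgb.shape[:2])
```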
S2-3, input PFOI into a RoiAlign layer and obtain the current prediction box through a fully connected layer; compute the contour error loss between the current prediction box and the manually annotated building contour. For rectangular contours,
L_rect = Σ_i (P_i − Q_i)²,  (1)

where P and Q are the predicted contour and the manually annotated contour respectively, P_i − Q_i denotes the error between them (which may be taken as the average gap between the two contours, i.e. the arithmetic mean over any number of points on the contour; the same below), x, y are the coordinates of one corner vertex of the annotated contour, w1 is the width of the contour, h is its height, i indexes the i-th building FOI contour, and Σ_i sums over all rectangular building FOI contours in the feature map PFOI. For a circular contour,

L_circ = Σ_i [ (d_P_i − d_Q_i)² + c_i² ],  (2)

where d_P_i and d_Q_i are the diameters of the predicted and manually annotated circular contours of the i-th building, x_r, y_r and d are the centre coordinates and diameter of the manually annotated circular contour, c_i is the centre distance between the predicted and annotated circular contours of the i-th building, and Σ_i sums over all circular building FOI contours in PFOI.

For the error loss of other irregular contours, the minimum bounding rectangle is used and computed with formula (1):

L_irr = Σ_i (P'_i − Q'_i)²,  (3)

where P' and Q' are the minimum bounding rectangles of the predicted contour and of the manually annotated contour respectively, P'_i − Q'_i denotes the error between them, x', y' are the coordinates of one corner vertex of the annotated minimum bounding rectangle, w1' is its width, h' is its height, and Σ_i sums over all other irregular building FOI contours in PFOI.

The minimum bounding rectangle is determined by predicting the building vertices with the RoiAlign layer + fully connected layer FC and a multi-layer RNN with convolutional long short-term memory ConvLSTM.

Adjust the network parameters by back-propagating the error loss and correct the prediction by box regression. When the rate of change of the loss,

|L^(j+1) − L^(j)| / L^(j),

approaches a preset threshold thres (thres < 2-5%), the intermediate predicted contour P_m and the intermediate model M_m are considered obtained, where L^(j+1) and L^(j) are the error losses of the (j+1)-th and j-th training iterations;
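For concreteness, the rectangular loss of formula (1) and the stopping rule can be sketched as follows (one reading of the reconstructed formulas; the (x, y, w1, h) box parameterization follows the definitions above):

```python
import numpy as np

def rect_contour_loss(pred_boxes, gt_boxes):
    # Formula (1): squared error over the (x, y, w1, h) parameters,
    # summed over all rectangular FOI contours in the feature map PFOI.
    diff = np.asarray(pred_boxes, float) - np.asarray(gt_boxes, float)
    return float((diff ** 2).sum())

def converged(loss_prev, loss_cur, thres=0.02):
    # Training stops when the relative change of the error loss between
    # consecutive iterations approaches the preset threshold (thres < 2-5%).
    return abs(loss_cur - loss_prev) / loss_prev < thres
```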
s2-4 adjusting initial sliding rectangular frame F0Length L and width W of the frame to form a middle sliding rectangular frame FmRepeating the steps S2-2-S2-3 to obtain a plurality of intermediate prediction profiles Pm (1),Pm (2),*a,Pm (K)And a corresponding plurality of intermediate models Mm (1),Mm (2),…,Mm (K)Where K is the number of repetitions, preferably the adjustment is to increase and/or decrease L and W in steps adjusted by nr, where r is the resolution of the telemetric image map, n ∈ [10,20 ]],K∈[1,10]Obtaining the middle sliding rectangular frame and the model corresponding to the minimum error of the K +1 predicted profiles and the artificial profiles, or removing the L and W of the middle sliding rectangular frame corresponding to the residual predicted profiles with the maximum and minimum errors when the K is more than or equal to 3, and averaging the network parameters to be used as the final sliding frame F when the training is finishedfAnd a model M for verifying the error between the predicted contour and the manual mark box by using the verification set.
It can be understood that the latter choice can, with some probability, yield a larger sliding frame with more accurate prediction, and a larger sliding frame finishes scanning a remote sensing image map of contours to be detected faster.
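One reading of the S2-4 selection rule, as a sketch (names illustrative; the model network parameters are averaged over the same surviving candidates):

```python
def final_frame(frames, errors):
    # frames: (L, W) of the K+1 candidate sliding frames; errors: contour
    # error of each candidate's predictions against the manual contours.
    order = sorted(range(len(frames)), key=lambda i: errors[i])
    if len(frames) >= 4:                  # K >= 3 repetitions
        keep = order[1:-1]                # drop the min- and max-error frames
        L = sum(frames[i][0] for i in keep) / len(keep)
        W = sum(frames[i][1] for i in keep) / len(keep)
        return (L, W)
    return frames[order[0]]               # otherwise the minimum-error frame
```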
About S3
S3 specifically includes:
S3-1, obtain the remote sensing image maps of all building contours to be extracted in the training, validation and test sets, slide-scan them with the final sliding frame F_f, and input the resulting PFOI into model M to obtain the predicted contour P_f;
It will be appreciated that when the PFOI acquisition scans across portions of urban roads, vehicles or other box-like objects may appear and be misidentified as building contours; these need only be subtracted at the end to obtain the pure predicted building contours.

S3-2, register the remote sensing image maps having predicted building contours P_f in the training, validation and test sets with the corresponding aerial LIDAR point cloud maps through their respective coordinate systems, and map the predicted building contours P_f of the remote sensing image maps into the corresponding aerial LIDAR point cloud maps to form the contour layer PfL. Here registration means aligning each remote sensing image map having predicted building contours P_f with its corresponding aerial LIDAR point cloud map by coinciding the origins of their respective coordinate systems and their X or Y axes, or by coinciding their Z axes so that their X or Y axes are parallel.
Preferably, in S3-2 the corresponding infrared remote sensing map is also registered, through the respective coordinate systems, with the aerial LIDAR point cloud map carrying the predicted building contours P_f, and is mapped into that point cloud map to form the remote sensing infrared layer IRL.
In one embodiment, S3-2 also maps the optical information of the test-set remote sensing image maps having predicted building contours P_f into the corresponding registered aerial LIDAR point cloud maps to form the remote sensing image layer RL.
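Because registration reduces to coinciding origins and axes, carrying a predicted contour into the point cloud map needs only the shared coordinates and the target map's pixel scale. A minimal sketch under that assumption (names illustrative):

```python
def contour_to_lidar_pixels(contour_E, lidar_origin_px, lidar_res):
    # contour_E: contour vertices in the shared coordinate system E(i);
    # lidar_origin_px: pixel position of E's origin in the point cloud map;
    # lidar_res: ground distance covered by one point-cloud-map pixel.
    ox, oy = lidar_origin_px
    return [(ox + x / lidar_res, oy - y / lidar_res)  # image y grows downward
            for (x, y) in contour_E]
```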
About S4
S4 specifically includes:
S4-1, obtain the building contours in the registered remote sensing image maps of the training set and the building roof images pic1 inside them; number the building contours and the roof images pic1; and obtain the average RGB triple over a plurality of interior points of each image, preferably as a weighted average in which the weight is smaller closer to the predicted contour: the distance from an interior point to the chosen contour edge is divided into lp steps, and the weight decreases linearly from the interior value down to 50% at the contour edge (one reading is sketched below);
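A sketch of that weighting: full weight deep inside the roof, falling linearly to 50% at the predicted contour (the lp-step discretization is approximated continuously here, an assumption):

```python
import numpy as np

def weighted_mean_rgb(samples_rgb, dist_to_contour):
    # samples_rgb: (P, 3) RGB triples of interior points; dist_to_contour:
    # (P,) distance of each point to the chosen contour edge (S4-1).
    d = np.asarray(dist_to_contour, float)
    frac = d / d.max() if d.max() > 0 else np.ones_like(d)
    w = 0.5 + 0.5 * frac                 # 0.5 at the edge, 1.0 deepest inside
    return (np.asarray(samples_rgb, float) * w[:, None]).sum(0) / w.sum()
```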
S4-2, establish the RGB cluster distribution map of the building roof materials corresponding to the building roof images pic1, the materials including concrete, brick, tile, polymer materials, solar panel material and asphalt;

S4-3, input the U_j building roof images pic1^(j) of the same roofing material in the training set, together with the corresponding building-roofing-material RGB triples, into the RoiAlign layer; obtain the predicted classification s through a fully connected layer and a softmax function; and compute the j-th roofing-material error loss from the confidence of the classification q that corresponds, in the cluster distribution map, to the input RGB triples:
L_j = −(1/|p_j|) Σ_{k∈p_j} Σ_{q=1..Q} y_q^(k) log softmax(z^(k))_q,  (4)

where y_q^(k) = 1 when the building roof image numbered k of the j-th roofing material corresponds to class q in the cluster distribution map, and y_q^(k) = 0 otherwise. Since a building roof is of only one material, y^(k) is one-hot, and formula (4) reduces to

L_j = −(1/|p_j|) Σ_{k∈p_j} log ( e^{z_q^(k)} / Σ_{q'=1..Q} e^{z_{q'}^(k)} ),  (5)

where p_j denotes the training set of the j-th roofing material, softmax(z^(k))_q is the confidence of the building roof image numbered k for class q, z_q^(k) is the q-th component of the Q×1 vector obtained at the fully connected layer for roof image k when class q corresponds to the classification in the cluster distribution map, Q is the total number of roofing materials with Q ≥ 2, Σ_{q'=1..Q} and Σ_{k∈p_j} denote respectively summation over the Q roofing materials and over all numbers in the training set of the j-th roofing material, and log() is the logarithm to base e.

Adjust the network parameters by back-propagating the error loss and correct the prediction by classification regression. When the rate of change of the loss,

|L^(l+1) − L^(l)| / L^(l),

approaches the preset threshold thres (thres < 2-5%), where L^(l+1) and L^(l) are the error losses of the (l+1)-th and l-th training iterations, the confidence of the predicted classification of the j-th roofing material for class q,

conf_j^(q) = (1/|p_j|) Σ_{k∈p_j} softmax(z^(k))_q,

is considered obtained, where z_q^(k) is at this point the q-th component of the Q×1 vector obtained at the fully connected layer for roof image k when class q corresponds to the classification in the cluster distribution map, together with the intermediate classification prediction model S_j^(q) for the j-th roofing material corresponding to class q.

S4-4, change the class corresponding to the j-th roofing material in the cluster distribution map and repeat step S4-3, traversing all roofing classes to obtain the Q classification confidences conf_j^(1), ..., conf_j^(Q) and the corresponding intermediate classification prediction models S_j^(1), ..., S_j^(Q); select the intermediate classification prediction model with the maximum confidence as the final classification model S_jf of the j-th roofing material.
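Formula (5) is the standard one-hot softmax cross-entropy; a minimal numerically stable sketch over the fully-connected outputs (names illustrative):

```python
import numpy as np

def material_loss(logits, true_class):
    # logits: (K, Q) fully-connected outputs z for the K roof images of
    # the j-th material; true_class: the index q from the cluster map.
    z = logits - logits.max(axis=1, keepdims=True)       # stability shift
    log_sm = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_sm[:, true_class].mean())          # formula (5)
```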
S4-5, select the U_{j+1} building roof images pic1^(j+1) of another roofing material and repeat steps S4-3 to S4-4, traversing all Q roofing materials to obtain the final roofing-material classification model set S_f = {S_1f, S_2f, ..., S_Qf}; pair each roofing-material classification model with the texture map P_str of that building roofing material, S_f → P_str, written P_str = F(S_f), to obtain the mapping model S;

verify with the building contours in the registered remote sensing image maps of the validation set and the building roof images pic2 inside them;

S4-6, obtain at least 1 registered remote sensing image map from the test set, obtain the building contours to be classified and the building roof images pic3 inside them, and substitute pic3 into S_f; take the material corresponding to the model S_maxf with the maximum of the Q confidences as the predicted material of the roof in the input pic3, obtain the building-roofing-material texture map P_maxstr with the corresponding mapping model F_maxf(S_maxf), find the corresponding texture pattern in the texture library, and fill its texture into the corresponding building contours in the aerial LIDAR point cloud map registered with the at least 1 registered remote sensing image map obtained from the test set, forming the texture layer PmaxstrL and completing the mapping.
Preferably, a step S4-5' is further included between S4-5 and S4-6: establish a roofing-material infrared classification model S_rem between the normalized intensities of the characteristic wavebands in the infrared remote sensing maps of roofs of different materials and the roofing-material classes, which specifically comprises:

performing principal component analysis (PCA) cluster analysis on the roofs of different materials corresponding to the FOIs of a plurality of buildings in the registered remote sensing infrared maps of the training set to obtain the loading plot; obtaining the characteristic wavebands sensitive to the roofs of different materials; using the normalized intensities of these characteristic wavebands as the input of the neural network to establish the artificial-neural-network roofing-material infrared classification model S_rem; and verifying with the validation set;
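A minimal sketch of picking characteristic wavebands from PCA loadings, computed via SVD; ranking bands by loading magnitude on the leading components is an assumed reading of "sensitive":

```python
import numpy as np

def characteristic_bands(spectra, n_components=3, n_bands=5):
    # spectra: (roofs, bands) normalized infrared intensities, one row per
    # building-FOI roof of known material (S4-5').
    X = np.asarray(spectra, float)
    Xc = X - X.mean(axis=0)                       # centre before PCA
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    load = np.abs(vt[:n_components]).sum(axis=0)  # loading magnitude per band
    return np.argsort(load)[::-1][:n_bands]       # most sensitive bands first
```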
at this time, pic3 was substituted into S4-6fObtaining a model S corresponding to the maximum value in Q confidence coefficientsmaxfThe corresponding material is used as the predicted material classification s corresponding to the roof in the input pic3fThen, acquiring the normalized intensity of the characteristic wave band in the infrared remote sensing image corresponding to pic3 and inputting the normalized intensity into the infrared classification model S of the roof materialremTo obtain a classification s of the roofing materialremfIf s isf=sremfThen use the corresponding chartlet model Fmaxf(Smaxf) Obtaining a texture map P of the roof material of the buildingmaxstrFinding out corresponding texture pattern in the mapping library, filling the texture of the texture pattern into the outline of corresponding building in the aeronautical LIDAR point cloud picture correspondingly registered by the at least 1 registered remote sensing image picture acquired in the test set to form a mapping layer PmaxstrL completes the mapping, and on the contrary,
sorting roofing materials sremfCorresponding roofing material classification model SremfObtaining a texture map P of the roof material of the buildingremf=Fremf(Sremf) Wherein the subscript rem represents the classification corresponding to the classification model of the roofing material via the infrared classification model S of the roofing materialremFinding out corresponding texture patterns in a mapping library, filling the texture patterns into the outline of the corresponding building in the aerial LIDAR point cloud picture correspondingly registered by the at least 1 registered remote sensing image picture obtained in the test set to form the mappingLayer PremfAnd L, completing mapping.
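The agreement rule above reduces to a small decision function; a sketch with illustrative class-to-texture lookup tables:

```python
def pick_texture(s_f, s_remf, rgb_textures, ir_textures):
    # s_f: classification from the RGB model S_f; s_remf: classification
    # from the infrared model S_rem. When the two agree, use the RGB
    # mapping model's texture map (P_maxstr via F_maxf); otherwise fall
    # back to the texture of the infrared classification (P_remf).
    if s_f == s_remf:
        return rgb_textures[s_f]
    return ir_textures[s_remf]
```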
Preferably, the characteristic-waveband map in the infrared remote sensing map corresponding to pic2 is obtained by taking the remote sensing infrared spectra of corresponding points within a plurality of building contours from pic2, computing the average spectrum, and deriving the characteristic-waveband map from it. The average spectrum is the arithmetic mean of the integrated intensities of the spectral peaks of the respective wavebands, or their weighted mean according to per-waveband weights.
The invention also provides an artificial neural network building contour extraction and texture mapping system based on sliding edge detection, characterized by comprising an aerial remote sensing system, a ground server and a client, with data communication between the aerial remote sensing system and the ground server and between the ground server and the client.

The aerial remote sensing system comprises a satellite positioning device, a satellite remote sensing image capture device, an aerial LIDAR point cloud system and a remote sensing infrared capture device, which synchronously capture ground visible-band remote sensing imagery to obtain the remote sensing image maps, acquire the aerial LIDAR point cloud data, and capture the remote sensing infrared maps and infrared spectra.

The ground server processes the remote sensing image maps, the aerial LIDAR point cloud data and the remote sensing infrared spectra to perform building contour extraction and texture mapping by the artificial neural network building contour extraction and texture mapping method based on sliding edge detection, and sends the contour extraction and texture mapping results to users on request.

The client can display the received results.
The invention also provides a non-transitory storage medium storing a computer-readable program executable by the ground server to implement the aforementioned artificial neural network building contour extraction and texture mapping method based on sliding edge detection.
Advantageous effects
1. An algorithm based on sliding edge detection is used at the network front end, in place of an artificial network model, to form the feature map with coarsely extracted contours, simplifying the algorithm while extracting building contours accurately.
2. Coordinate-system registration achieves fast registration between the remote sensing image maps and the corresponding aerial LIDAR point cloud maps and remote sensing infrared maps.
3. RGB and remote sensing infrared data are modelled separately to identify the roof material types, achieving accurate roof texture mapping.
Drawings
FIG. 1 is a schematic diagram of the process of establishing a unified coordinate system between a remote sensing image map, the corresponding aerial LIDAR point cloud map and the corresponding infrared remote sensing image map;
FIG. 2 is a graph of RGB clustering results for concrete and black tile;
FIG. 3 shows the initial sliding frame F at a certain moment during the machine column-by-column sliding scan started at time 0 from the lower-left corner of the remote sensing image map of city A;
FIG. 4 is a schematic diagram of the acquisition of the minimum bounding rectangle;
FIG. 5 is a schematic diagram of registration and information mapping between a remote sensing image map, the corresponding aerial LIDAR point cloud map and the corresponding infrared remote sensing image map in a unified coordinate system E';
FIG. 6 is a schematic diagram of the building roof material identification classification models and the mapping model S;
FIG. 7 is a LIDAR point cloud map layer DL in which a plurality of building roofs have been textured.
Detailed Description
Example 1
S1-1, at each of 54000 moments in one morning, synchronously acquire 1 remote sensing image map of city A, 1 corresponding aerial LIDAR point cloud map and 1 corresponding infrared remote sensing image map, 54000 × 3 images in total, among which are 1 remote sensing image map containing a complete municipal building with a rectangular roof frame, the corresponding 1 aerial LIDAR point cloud map and the corresponding 1 infrared remote sensing image;
S1-2, in the 1 remote sensing image map containing the complete rectangular-roof-frame municipal building, the corresponding 1 aerial LIDAR point cloud map and the corresponding 1 infrared remote sensing image, select the ground projection of one roof-frame vertex of the municipal building in the 3 images as the coordinate origin O, and form the ground-plane rectangular coordinate system E with the vertical ground projection of one roof edge as the X axis and that of the other edge as the Y axis.
Determine the angle α between coordinate system E and true north. In each map group formed from all the remaining remote sensing image maps, corresponding aerial LIDAR point cloud maps and remote sensing infrared maps, place the origin of E at the point corresponding to the centre of the 1 remote sensing image map (i.e., the point in the 1 aerial LIDAR point cloud map and the 1 remote sensing infrared map at the same position under E as the centre of the 1 remote sensing image map), and adjust the X-axis orientation so that its angle to true north is α, obtaining the coordinate systems E^(i), i = 1, 2, ..., N, with a uniform X-axis orientation in the 1 remote sensing image map, the corresponding 1 aerial LIDAR point cloud map and the 1 remote sensing infrared map of each group. The 54000 × 3 images with established coordinate systems are divided into training, validation and test sets in the ratio 50:9:1 (the whole process is shown in FIG. 1).
Example 2
S2 specifically includes:
S2-1, determine an initial sliding rectangular frame F_0 with length 30r and width 30r;
S2-2, obtain the RGB triples of a predetermined number of pixels inside each roof near its contour in each remote sensing image map of the training set; set the preset time interval to 1 s; slide-scan all areas of each remote sensing image map in the training set from time zero with the initial sliding rectangular frame F_0; during the sliding scan, determine the contours inside F_0 every 1 s to form the contours of interest FOI; and obtain the feature map PFOI of the contours of interest FOI containing the building contours after the scan completes.
The sliding scan is a machine scan: after aligning the right-angle side of the initial sliding rectangular frame F_0 with the two perpendicular edges at any corner of the remote sensing image map, start the column sliding scan; when a column is finished, translate by one frame width in the width direction of the image, align the width-direction side of F_0 with the width-direction edge of the image, and run a new row scan, until the area is fully scanned. The row sliding scan proceeds at constant speed: in each preset 1 s interval, F_0 slides exactly one frame length in the row-scan direction.
Alternatively, after determining the contour FOI inside the initial sliding rectangular frame F_0 every 1 s, set the area of F_0 in the scanned remote sensing image map to a white fill as a prompt, and delete the fill after all areas have been scanned.
Wherein determining the contours inside the initial sliding rectangular frame F_0 every 1 s to form the contours of interest FOI specifically comprises:
S2-2-1, obtain the classified cluster map of the RGB triples of the different roof materials (including plaza ground) through a clustering algorithm (the clusters for concrete and black tile are shown in FIG. 2; typical RGB values are listed in Table 1).
TABLE 1 Typical RGB values of various roofing materials
Obtain the RGB triples inside the initial sliding rectangular frame F_0 every 1 s from time zero; set pixels whose RGB triple is not in any cluster of the city-roof RGB triples to a first gray value and the remainder to a second, different gray value; or set to the first gray value the pixels whose RGB triple is in a cluster of the city-roof RGB triples, together with those outside every cluster whose smallest distance to a cluster point, d = sqrt((R1 − R2)² + (G1 − G2)² + (B1 − B2)²), lies within the threshold range, and set the remainder to the second gray value, forming the binarized map; the threshold is 25. Alternatively,

determine the roof RGB distribution range from the obtained roof RGB triples, obtain the RGB triples inside the initial sliding rectangular frame F_0 every 1 s from time zero, set a pixel to the first gray value when at least one of its three RGB values is outside the roof RGB distribution range and its smallest difference from the values in the range is below the preset threshold, and set the remainder to the second, different gray value; the threshold is 80;

S2-2-2, identify the contours in the binarized map with edge detection; here the edge detection determines the contours by point-by-point scanning, which specifically comprises: scanning each pixel of the binarized map row by row and setting a pixel as a contour point whenever the three RGB values jump, thereby traversing all pixels inside the initial sliding rectangular frame F_0 and completing the determination of the building contours at the 1 s moment;

S2-2-3, after obtaining the binarized map, remove the binarization to recover the remote sensing image while retaining the identified contours.
FIG. 3 shows the initial sliding frame F_0 in city A at a certain moment (> 3 s) during the machine column-by-column sliding scan started at time 0 from the lower-left corner of the remote sensing image map. Two building parts have been identified: two complete side-by-side buildings at the left edge of the image, and a black FOI that is part of an L-shaped irregular building; the arrows indicate the direction of the column sliding scan. The visible contours already reveal part of the detail, and visually it seems the box-regression step of the back-end network could almost be dispensed with; the sliding-frame edge detection scheme thus determines building contours quickly and conveniently. In the figure, D is a road already widened in the model, and B is a manually annotated vacant area away from road D.
Although other contour points and lines are mistakenly identified in the contour, on one hand, in the RoiAlign process, the real building contour outside the RoiAlign process is used as a candidate region, the corresponding candidate region is pooled into a feature map with a fixed size in the subsequent PFOI according to the position coordinates of the building contour of the candidate region, and for the subsequent frame regression, the contour points and lines are not taken as the frame part and are not included in the error loss calculation, so that the prediction of the contour is not influenced. On the other hand, since these contour points and lines exist inside the building, they are covered when the final map is filled, and therefore, the existence of these contour points and lines is not good for the subsequent map.
S2-3, input PFOI into the RoiAlign layer and obtain the current prediction box through the fully connected layer; the contour error loss is calculated from the current prediction box and the manually marked building contour. For the partly rectangular contour FOI identified on the rectangular building at the left edge of FIG. 3,

L = Σ_{i∈PFOI} (P_i − Q_i)²    (1)

where P and Q are the predicted contour and the manually marked contour respectively, each parameterized as (x, y, w1, h); P_i − Q_i represents the error between the predicted and the manually marked contour; x, y are the coordinates of one corner vertex of the manually marked contour, w1 is the width of the contour and h is its height; i denotes the i-th building FOI contour; and Σ_{i∈PFOI} denotes summation over all rectangular building FOI contours in the feature map PFOI. The error loss of the two parallel complete shaped building contours is computed over the minimum bounding rectangle of the manually marked contour, using formula (1):

L = Σ_{i∈PFOI} (P′_i − Q′_i)²    (3)

where P′ and Q′ are the minimum bounding rectangle of the predicted contour and of the manually marked contour respectively; P′_i − Q′_i represents the error between them; x′, y′ are the coordinates of one corner vertex of the manually marked minimum bounding rectangle, w1′ is the width of the minimum bounding rectangle and h′ is its height; and Σ_{i∈PFOI} denotes summation over all other shaped building FOI contours in the feature map PFOI.
The process of obtaining the minimum bounding rectangle is shown in FIG. 4. The FOI of the left one of the two parallel complete shaped buildings is input into the RoiAlign layer to fix the feature region and then enters the fully connected layer FC, yielding a number of predicted vertices. A multi-layer RNN of convolutional long short-term memory (ConvLSTM) then loops 10 times over the building vertices to find the 10 highest-probability predicted vertices, and vertices 1, 3, 4, 5, 9 and 10 are connected to form the minimum bounding rectangle of the FOI.
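Given the highest-probability vertices predicted by the ConvLSTM loop, the minimum bounding rectangle of FIG. 4 can be sketched as follows, assuming an axis-aligned rectangle parameterized as (x, y, w1, h) as in formula (1); the ConvLSTM vertex prediction itself is not reproduced here.

import numpy as np

def min_bounding_rect(vertices: np.ndarray) -> tuple:
    """vertices: N x 2 array of predicted (x, y) vertex coordinates."""
    x_min, y_min = vertices.min(axis=0)
    x_max, y_max = vertices.max(axis=0)
    return float(x_min), float(y_min), float(x_max - x_min), float(y_max - y_min)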
Then network parameters are adjusted by error-loss back-propagation and frame regression corrects the prediction result. When the rate of change of the loss value,

|L(j+1) − L(j)| / L(j) < thres,

reaches the preset threshold thres < 2%, the intermediate predicted profile Pm and the intermediate model Mm are considered obtained, where L(j+1) and L(j) are the error losses of the (j+1)-th and the j-th training respectively.
Although the minimum bounding rectangle is used when adjusting the network parameters, the shaped contour itself is still predicted after FC in actual prediction, because the RoiAlign layer preserves the fixed feature region. The minimum bounding rectangle merely simplifies the computation of the loss function, and the multi-layer ConvLSTM RNN also guarantees the accuracy of the minimum bounding rectangle.
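The stopping test on the rate of change of the loss can be sketched in a few lines; the function name is illustrative and the 2% default comes from the threshold stated above.

def converged(loss_prev: float, loss_curr: float, thres: float = 0.02) -> bool:
    """True once the relative change of the error loss falls below thres."""
    return abs(loss_curr - loss_prev) / loss_prev < thres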
S2-4, synchronously adjust the length L and width W of the initial sliding rectangular frame F0 in steps of 10r to form intermediate sliding rectangular frames Fm; repeat steps S2-2 to S2-3 five times to obtain 5 intermediate predicted profiles Pm(1), Pm(2), Pm(3), Pm(4), Pm(5) and the corresponding 5 intermediate models Mm(1), Mm(2), Mm(3), Mm(4), Mm(5). Of the 6 predicted profiles, take the intermediate sliding rectangular frame and model whose error against the manual profile is smallest as the final sliding frame Ff and model M at the end of training, and use the validation set to verify the error between the predicted contour and the manually marked box.
Example 3
S3-1, obtaining and testing the training set, the verification set and all the remote sensing image maps of the building outline to be extracted in the testing set, and utilizing the final sliding frame FfPerforming slippage scanning on the remote sensing image maps of all the building outlines to be extracted to obtain a predicted outline P in a PFOI input model Mf
S3-2, as in FIG. 5, the remote sensing image maps of the test set carrying the predicted building outline Pf, the corresponding aerial LIDAR point cloud maps and the corresponding infrared remote sensing maps are registered by overlapping the Z axes of their respective coordinate systems E(i) so that the X axes (pointing right) coincide. The predicted building outline Pf of each remote sensing image map is mapped into the corresponding aerial LIDAR point cloud map to form a contour layer PfL; the optical information in the test-set remote sensing image maps carrying Pf is mapped into the corresponding registered aerial LIDAR point cloud maps to form a remote sensing image layer RL; and the corresponding infrared remote sensing maps are mapped into the aerial LIDAR point cloud maps already carrying Pf to form a remote sensing infrared layer IRL.
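A minimal sketch of the coordinate mapping behind this layering, assuming the remote sensing image and the point cloud share an origin and coincident X axes as described, with r the image resolution in metres per pixel; the row-flip convention for the Y axis is an illustrative assumption.

import numpy as np

def pixels_to_ground(contour_px: np.ndarray, img_height: int, r: float) -> np.ndarray:
    """Map (row, col) contour pixels into the shared ground coordinate system E(i)."""
    x = contour_px[:, 1] * r                      # columns grow along +X
    y = (img_height - 1 - contour_px[:, 0]) * r   # image rows grow along -Y
    return np.stack([x, y], axis=1)               # ground (X, Y) for the contour layer PfL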
Example 4
S4 specifically includes:
S4-1, obtain the building outlines in the registered remote sensing image maps of the training set and 9000 building rooftop images pic1 inside them, i.e. 10 building rooftop images pic1 from each of the 900 training-set images, choosing buildings near the four corners or edges and near the centre of each image; number the buildings and obtain the average RGB tri-values of 5 points inside each image (four points near the corners of the bounding rectangle and one at its centre);

S4-2, establish the RGB cluster distribution map of the corresponding building roofing materials of the 9000 building rooftop images pic1, the roofing materials consisting mainly of four kinds: concrete, black tile, asphalt and polymer waterproof material;
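The five-point RGB sampling of S4-1 can be sketched as below, assuming the bounding rectangle is given as (x, y, w, h) in pixel coordinates; the 15% inset used to place the four near-corner points is an illustrative choice, since the text only specifies points close to the four bounding-rectangle corners plus the centre.

import numpy as np

def mean_roof_rgb(img: np.ndarray, box: tuple, inset: float = 0.15) -> np.ndarray:
    """Average RGB tri-values over 4 near-corner points and the centre of the box."""
    x, y, w, h = box
    dx, dy = int(w * inset), int(h * inset)
    pts = [(y + dy, x + dx), (y + dy, x + w - dx),
           (y + h - dy, x + dx), (y + h - dy, x + w - dx),
           (y + h // 2, x + w // 2)]
    return np.mean([img[r, c].astype(float) for r, c in pts], axis=0)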
S4-3, as shown in FIG. 6, input the U1 building roof images pic1(1) of the same roofing material in the training set, together with the corresponding RGB tri-values of the building roofing material, into the RoiAlign layer; pic1(1) passes through the fully connected layer and a softmax function to obtain a predicted classification s. From the confidence of the classification q that corresponds to the input RGB tri-values in the cluster distribution map, the error loss of the type-1 roofing material is

L1 = − Σ_{k∈p1} Σ_{q′=1..Q} y_{q′}(k) · log s_{q′}(k)    (4)

where y_{q′}(k) ∈ {0,1} indicates whether the building roof image pic1_k(1) numbered k of the type-1 roofing material belongs to class q′ in the cluster distribution map. Since only one kind of roof is built on each building, Σ_{q′} y_{q′}(k) = 1, and equation (4) reduces to

L1 = − Σ_{k∈p1} log s_q(k)    (5)

where p1 denotes the training set of the type-1 roofing material; s_q(k) = exp(z_q(k)) / Σ_{q′=1..Q} exp(z_{q′}(k)) is the confidence, under the softmax function, of the building roof image numbered k with respect to class q; z_q(k) is the q-th vector value of the 4 × 1-dimensional vector corresponding to class q in the cluster distribution map, obtained at the fully connected layer for the building roof image numbered k; Σ_{q′=1..Q} and Σ_{k∈p1} denote summation over the total number of roofing materials Q and over all numbers in the training set of the type-1 roofing material respectively; and log() is the logarithm to base e.
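A sketch of equations (4)-(5) as reconstructed above: the 4 × 1 fully connected output z(k) is turned into softmax confidences and the negative log-confidence of class q is accumulated over the training images; the function names are illustrative.

import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

def material_loss(fc_outputs: np.ndarray, q: int) -> float:
    """fc_outputs: K x Q array, one row z(k) per roof image pic1 numbered k; q: cluster-map class."""
    return float(-sum(np.log(softmax(z)[q]) for z in fc_outputs))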
Network parameters are adjusted by error-loss back-propagation and classification regression corrects the prediction result. When the rate of change of the loss value, |L(l+1) − L(l)| / L(l), where L(l+1) and L(l) are the error losses of the (l+1)-th and the l-th training respectively, approaches the preset threshold range (thres is less than 2-5%), the confidence s_q(k) of the predicted classification of the type-1 roofing material corresponding to class q in the cluster distribution map is considered obtained, where z_q(k) is now the q-th vector value of the 4 × 1-dimensional vector obtained at the fully connected layer for the building roof image numbered k when class q corresponds to the cluster distribution map, giving the intermediate classification prediction model S1m(q) of the type-1 roofing material corresponding to class q in the cluster distribution map.
S4-4, change the class in the cluster distribution map to which the type-1 roofing material corresponds and repeat step S4-3 through all roofing classes, obtaining the confidences of the 4 class assignments and the corresponding 4 intermediate classification prediction models; select the intermediate classification prediction model of maximum confidence as the final classification model S1f of the type-1 roofing material.
S4-5, select the U2 building roof images pic1(2) of another roofing material and repeat steps S4-3 to S4-4, traversing all 4 roofing materials to obtain the final roofing-material classification model Sf = {S1f, S2f, S3f, S4f}; establish the mapping relationship F: Sf → Pstr between the roofing-material classification models and the building roofing-material texture maps Pstr, obtaining the chartlet model S = F(Sf); verify with the building outlines in the registered remote sensing image maps of the validation set and a number of building roof images pic2 inside them;
S4-5', establish the roofing-material infrared classification model Srem between the normalized intensities of the characteristic wave bands in the infrared remote sensing spectra of different roof types of different materials and the roof type, specifically comprising:

performing principal component analysis cluster analysis on the roofs of different materials corresponding to the FOIs of a number of buildings in the registered remote sensing infrared maps of the training set to obtain the loading plot, obtaining the characteristic wave bands sensitive to roofs of different materials, using the normalized intensities of the characteristic wave bands as the input of the neural network to establish the artificial-neural-network-based roofing-material infrared classification model Srem, and verifying it with the validation set. Table 2 shows the material identification results for the 8 numbered buildings in FIG. 3, with an accuracy above 87%.
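A sketch of this PCA step, assuming each row of `spectra` holds the normalized band intensities of one rooftop; selecting bands by their summed absolute loadings on the leading components is one plausible reading of "obtaining the loading plot", not the patent's exact procedure.

import numpy as np
from sklearn.decomposition import PCA

def characteristic_bands(spectra: np.ndarray, n_components: int = 2, n_bands: int = 5) -> np.ndarray:
    """Return indices of the bands most sensitive to roofing-material differences."""
    pca = PCA(n_components=n_components).fit(spectra)
    loadings = np.abs(pca.components_).sum(axis=0)  # per-band loading magnitude
    return np.argsort(loadings)[::-1][:n_bands]     # top-loading bands first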
Table 2: Sf accuracy verification results (the per-building values are given as an image in the original and are not reproduced here)
S4-6, obtain at least 1 registered remote sensing image map in the test set and a number of building roof images pic3, including building outlines and their interiors, to be classified. Substitute pic3 into Sf and take the material corresponding to the model Smaxf with the maximum of the Q confidences as the predicted material classification sf for the roof in pic3; then obtain the normalized intensities of the characteristic wave bands in the infrared remote sensing map corresponding to pic3 and input them into the roofing-material infrared classification model Srem to obtain the roofing-material classification sremf. If sf = sremf, use the corresponding chartlet model Fmaxf(Smaxf) to obtain the building roofing-material texture map Pmaxstr, find the corresponding texture pattern in the chartlet library, and fill its texture into the corresponding building outlines in the aerial LIDAR point cloud map registered to the at least 1 registered remote sensing image map obtained in the test set, forming a chartlet layer PmaxstrL and completing the mapping (as shown in FIG. 7). The method thus completes artificial neural network building contour extraction and LIDAR point cloud texture mapping based on sliding edge detection, organized as a point cloud layer LD, a contour layer PfL, a remote sensing infrared layer IRL, a remote sensing image layer RL and a chartlet layer PL.
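The S4-6 decision rule, combining the optical classification sf with the infrared classification sremf, can be sketched as below; the texture dictionary stands in for the chartlet library, with keys taken from the four example materials of S4-2.

TEXTURES = {"concrete": "tex_concrete.png", "black tile": "tex_black_tile.png",
            "asphalt": "tex_asphalt.png", "polymer waterproof": "tex_polymer.png"}

def pick_texture(s_f: str, s_remf: str) -> str:
    """Keep the optical result when it agrees with the infrared one, else defer to infrared."""
    material = s_f if s_f == s_remf else s_remf
    return TEXTURES[material]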

Claims (13)

1. The method for extracting the contour of the artificial neural network building and mapping the texture based on the sliding edge detection is characterized by comprising the following steps of:
s1, obtaining a plurality of remote sensing images of at least one city and corresponding aviation LIDAR point cloud images, determining a plurality of unified geographical coordinate systems among the plurality of remote sensing images and the corresponding aviation LIDAR point cloud images, and respectively dividing the plurality of remote sensing images and the corresponding aviation LIDAR point cloud images into a training set, a verification set and a test set;
S2, determine an initial sliding rectangular frame F0 with length L and width W, perform sliding scanning over the area of each remote sensing image map in the training set with the initial sliding rectangular frame F0, determine the contours of interest FOI during the sliding scanning, and establish the building contour extraction model M with the training set and validation set to obtain the final sliding frame Ff;
S3, obtain the remote sensing image maps of all building outlines to be extracted in the training set, validation set and test set, obtain the predicted building outline Pf with the final sliding frame Ff and the building contour extraction model M, register the remote sensing image maps of the extracted building outlines and the corresponding aerial LIDAR point cloud maps according to the coordinate systems, and map the predicted building outline Pf into the corresponding aerial LIDAR point cloud maps;
S4, obtain the building outlines in the registered remote sensing image maps of the training set and a plurality of building roof images pic1 inside them, establish a building chartlet model S using building roof texture maps, input the building outlines in at least 1 registered remote sensing image map of the test set and at least 1 building roof image pic3 inside it into S to obtain the corresponding building roof texture classification, find the corresponding texture pattern in the chartlet library, and fill the pattern texture into and inside the building outlines in the registered aerial LIDAR point cloud map of the test set to complete the mapping.
2. The method according to claim 1, wherein S1 specifically comprises:
s1-1, acquiring a plurality of remote sensing images of 1-660 cities and corresponding aerial LIDAR point cloud images, wherein the corresponding aerial LIDAR point cloud images are acquired at the same time as the plurality of remote sensing images, and acquiring at least 1 remote sensing image with a complete preset standard building and the corresponding aerial LIDAR point cloud image, and the number of the plurality of remote sensing images is 540-;
S1-2, establish a unified urban geographic coordinate system E in the 1 remote sensing image map with a complete preset standard building and its corresponding aerial LIDAR point cloud map; for all remaining map groups, each group consisting of two maps, one of the plurality of remote sensing image maps and its corresponding aerial LIDAR point cloud map, determine a unified coordinate system E(i), i = 1,2, …, N, with the same origin in the 1 remote sensing image map and the corresponding 1 aerial LIDAR point cloud map of each group according to geographic true north, where N is the number of remote sensing image maps or of the corresponding aerial LIDAR point cloud maps; the ratio in which the plurality of remote sensing image maps and corresponding aerial LIDAR point cloud maps with established coordinate systems are divided into training set, validation set and test set is 100-50:10-5:3-1.
3. The method according to claim 2, wherein the preset standard building has a rectangular roof frame, the vertical projection of one vertex on the ground is selected as the coordinate origin O, and the vertical projections of two sides on the ground are selected as the X axis and the Y axis, forming a ground plane rectangular coordinate system E; or the X axis points due east and the Y axis due north, forming a ground plane rectangular coordinate system E,

determining the unified coordinate systems E(i), i = 1,2, …, N, with the same origin in all of the plurality of remote sensing image maps and each of the corresponding aerial LIDAR point cloud maps according to geographic true north specifically comprises: determining the included angle α between the X axis of the coordinate system E and true north, placing the origin of the coordinate system E at the same preset point under E in the 1 remote sensing image map and the corresponding 1 aerial LIDAR point cloud map of each map group, and adjusting the X-axis direction so that its included angle with true north is α, thereby obtaining coordinate systems E(i), i = 1,2, …, N, with uniform X-axis directions in the remote sensing image map and the corresponding 1 aerial LIDAR point cloud map of each map group; when the X axis of E points due east and the Y axis due north, the X axis is adjusted to due east or the Y axis to due north.
4. The method according to claim 2, wherein S2 specifically comprises:
S2-1, determine an initial sliding rectangular frame F0 with length L and width W, the value range of L and W being R ∈ [nr, 2nr], where r is the resolution of the remote sensing image map and n ∈ [10,20];

S2-2, obtain the RGB tri-values of pixels close to the contour in a predetermined number of roofs in each remote sensing image map of the training set, determine a preset time interval T, perform sliding scanning over all areas of each remote sensing image map in the training set with the initial sliding rectangular frame F0 from time zero, determine the contour of interest FOI formed by the initial sliding rectangular frame F0 at every time T during the sliding scanning, and obtain the feature map PFOI of the contours of interest FOI containing building contours after the scanning is finished;
S2-3, input PFOI into the RoiAlign layer and obtain the current prediction box through the fully connected layer, and calculate the contour error loss from the current prediction box and the manually marked building contour; for a rectangular contour,

L = Σ_{i∈PFOI} (P_i − Q_i)²    (1)

where P and Q are the predicted contour and the manually marked contour respectively, each parameterized as (x, y, w1, h); P_i − Q_i represents the error between the predicted and the manually marked contour; x, y are the coordinates of one corner vertex of the manually marked contour, w1 is the width of the contour and h is the height of the contour; i denotes the i-th building FOI contour; and Σ_{i∈PFOI} denotes summation over all rectangular building FOI contours in the feature map PFOI;

for a circular contour,

L = Σ_{i∈PFOI} [ (d_Pi − d_Qi)² + ρ_i² ]    (2)

where d_Pi and d_Qi are the diameters of the predicted and of the manually marked circular contour of the i-th building respectively; x_r, y_r and d are the centre coordinates and diameter of the manually marked circular contour; ρ_i is the distance between the centres of the predicted and the manually marked circular contour of the i-th building; and Σ_{i∈PFOI} denotes summation over all circular building FOI contours in the feature map PFOI;

for the error loss of other shaped contours, the minimum bounding rectangle is used and computed with formula (1):

L = Σ_{i∈PFOI} (P′_i − Q′_i)²    (3)

where P′ and Q′ are the minimum bounding rectangle of the predicted contour and of the manually marked contour respectively; P′_i − Q′_i represents the error between them; x′, y′ are the coordinates of one corner vertex of the manually marked minimum bounding rectangle, w1′ is the width of the minimum bounding rectangle and h′ is its height; and Σ_{i∈PFOI} denotes summation over all other shaped building FOI contours in the feature map PFOI;

wherein the minimum bounding rectangle is determined by predicting the building vertices with the RoiAlign layer + fully connected layer FC and a multi-layer RNN of convolutional long short-term memory (ConvLSTM);

network parameters are adjusted by error-loss back-propagation and frame regression corrects the prediction result; when the rate of change of the loss value, |L(j+1) − L(j)| / L(j), where L(j+1) and L(j) are the error losses of the (j+1)-th and the j-th training respectively, tends to within the preset threshold range thres, the intermediate predicted profile Pm and the intermediate model Mm are considered obtained;
S2-4, adjust the length L and width W of the initial sliding rectangular frame F0 to form intermediate sliding rectangular frames Fm, and repeat steps S2-2 to S2-3 to obtain a plurality of intermediate predicted profiles Pm(1), Pm(2), …, Pm(K) and corresponding intermediate models Mm(1), Mm(2), …, Mm(K), where K is the number of repetitions; preferably the adjustment increases and/or decreases L and W in steps of nr, where r is the resolution of the remote sensing image map, n ∈ [10,20] and K ∈ [1,10]; take the intermediate sliding rectangular frame and model for which the error between the K+1 predicted profiles and the manual profiles is minimum, or, when K ≥ 3, remove the intermediate sliding rectangular frames corresponding to the predicted profiles with the maximum and minimum errors and average the L, W and network parameters of the remainder, as the final sliding frame Ff and model M at the end of training, and verify the error between the predicted contour and the manually marked box with the validation set.
5. The method according to claim 4, wherein determining the contour of interest FOI formed by the initial sliding rectangular frame F0 at every time T specifically comprises:

S2-2-1, obtain the RGB tri-value classification cluster maps of different roofing materials through a clustering algorithm; from time zero, at intervals of T, obtain the RGB tri-values within the initial sliding rectangular frame F0; set pixels whose RGB tri-values are not within the RGB tri-value clusters of urban roofs to a first gray or color value and the remainder to a second, different gray or color value; or set pixels whose RGB tri-values are within the RGB tri-value clusters of urban roofs, together with pixels whose RGB tri-values lie outside the clusters but whose distance to some point in a cluster is within a threshold range, to the first gray or color value and the remainder to the second, different gray or color value, thereby forming a binary map, the threshold range being 0-25; or,

determine the RGB distribution range of roofs from the obtained roof RGB values; from time zero, at intervals of T, obtain the RGB tri-values within the initial sliding rectangular frame F0; when at least one of the RGB tri-values is outside the RGB distribution range of roofs and its minimum difference from the values in that range exceeds a preset threshold, set the first gray or color value, and set the remainder to the second, different gray or color value, the threshold range being 25-254;
s2-2-2, identifying the contour in the binary image by using edge detection;
S2-2-3, after the binary image is obtained, or after the remote sensing image map has been scanned and completely binarized, perform de-binarization processing to restore the remote sensing image portion while retaining the identified contour.
6. The method of claim 5, wherein the predetermined number is 4-24 per building, with 3 points per contour edge of each building; T is 0.1-1 s; and the sliding scanning is machine scanning: after a right-angle side of the initial sliding rectangular frame F0 or intermediate sliding rectangular frame Fm is aligned with the two right-angle sides at any corner of the remote sensing image map, row or column sliding scanning starts; the frame is then translated in the width or length direction by the width of one initial sliding rectangular frame F0 or intermediate sliding rectangular frame Fm and sliding continues; after the width- or length-direction edge of the frame is aligned with the width- or length-direction edge of the remote sensing image map, a new row scan is started until the area scan is complete, wherein the row sliding scanning is constant-speed scanning in which the initial sliding rectangular frame F0 or intermediate sliding rectangular frame Fm slides exactly one frame length or width in the row scanning direction within the preset time interval T; or,

after the contour of interest FOI formed by the initial sliding rectangular frame F0 or intermediate sliding rectangular frame Fm is determined at every time T, the scanned initial sliding rectangular frame F0 or intermediate sliding rectangular frame Fm area is set as a scanned area and filled with a gray scale or color as a prompt, and the filled gray scale or color is deleted after all scanning is finished.
7. The method according to claim 6, wherein the edge detection comprises Canny edge detection or Sobel edge detection, or determines the contour by point-by-point scanning, specifically comprising:

scanning each pixel in the binary image line by line and, when a pixel with an abrupt change of RGB tri-values is encountered, setting it as a contour point, thereby traversing all pixels in the initial sliding rectangular frame F0 to complete the determination of the building contour at time T; the determined contour points are set to a third gray or color value, and the preset threshold thres is set within the range thres < 2-5%.
8. The method of claim 2,
s3 specifically includes:
S3-1, obtain the remote sensing image maps of all building outlines to be extracted in the training set, validation set and test set, perform sliding scanning on them with the final sliding frame Ff, and input the resulting PFOI into model M to obtain the predicted outline Pf;

S3-2, register the remote sensing image maps carrying the predicted building outline Pf in the training set, validation set and test set with the corresponding aerial LIDAR point cloud maps according to their respective coordinate systems, and map the predicted building outline Pf of each remote sensing image map into the corresponding aerial LIDAR point cloud map to form a contour layer PfL, wherein the registering means registering the remote sensing image maps carrying the predicted building outline Pf in the training set, validation set and test set with the corresponding aerial LIDAR point cloud maps by coinciding the origins of their respective coordinate systems and the X or Y axes, or by coinciding the Z axes so that the X or Y axes are parallel to each other.
9. The method according to any one of claims 2, 7 and 8, wherein S4 specifically comprises:
S4-1, obtain the building contours in the registered remote sensing image maps of the training set and a number of building roof images pic1 inside them; number the building contours and building roof images pic1 and obtain the average RGB tri-values of a number of points inside each image, preferably as a weighted average in which the weight is smaller the closer the point is to the predicted contour: the distance from an interior point to a chosen contour edge is divided into lp lengths, and the weight decreases linearly from 50% at the interior point toward the contour edge;
s4-2, establishing an RGB cluster distribution map of building roof materials corresponding to the building roof images pic1, wherein the building roof materials comprise concrete, bricks, tiles, high polymer materials, solar panel materials and asphalt;
S4-3, input the Uj building roof images pic1(j) of the same roofing material in the training set, together with the corresponding RGB tri-values of the building roofing material, into the RoiAlign layer; pic1(j) passes through the fully connected layer and a softmax function to obtain a predicted classification s, and the error loss of the j-th roofing material is calculated from the confidence of the classification q corresponding to the input RGB tri-values in the cluster distribution map:

Lj = − Σ_{k∈pj} log s_q(k)    (4)

where pj denotes the training set of the j-th roofing material; s_q(k) = exp(z_q(k)) / Σ_{q′=1..Q} exp(z_{q′}(k)) is the confidence, under the softmax function, of the building roof image numbered k with respect to class q; z_q(k) is the q-th vector value of the Q × 1-dimensional vector corresponding to classification q in the cluster distribution map, obtained at the fully connected layer for the building roof image numbered k, Q being the total number of roofing materials with Q ≥ 2; Σ_{q′=1..Q} and Σ_{k∈pj} denote summation over the total number of roofing materials Q and over all numbers in the training set of the j-th roofing material respectively; and log() is the logarithm to base e; network parameters are adjusted by error-loss back-propagation and classification regression corrects the prediction result; when the rate of change of the loss value, |L(l+1) − L(l)| / L(l), where L(l+1) and L(l) are the error losses of the (l+1)-th and the l-th training respectively, tends to within the preset threshold range thres < 2-5%, the confidence s_q(k) of the predicted classification of the j-th roofing material corresponding to class q in the cluster distribution map is considered obtained, where z_q(k) is now the q-th vector value of the Q × 1-dimensional vector obtained at the fully connected layer for the building roof image numbered k when corresponding to classification q in the cluster distribution map, yielding the intermediate classification prediction model Sjm(q) of the j-th roofing material corresponding to class q in the cluster distribution map;
S4-4, change the class in the cluster distribution map to which the j-th roofing material corresponds and repeat step S4-3 through all roofing classes, obtaining the confidences of the Q class assignments and the corresponding intermediate classification prediction models; select the intermediate classification prediction model of maximum confidence as the final classification model Sjf of the j-th roofing material;
S4-5, select the Uj+1 building roof images pic1(j+1) of another roofing material and repeat steps S4-3 to S4-4, traversing all Q roofing materials to obtain the final roofing-material classification model Sf = {S1f, S2f, …, SQf}; establish the mapping relationship F: Sf → Pstr between the roofing-material classification models and the building roofing-material texture maps Pstr, obtaining the chartlet model S = F(Sf); verify with the building outlines in the registered remote sensing image maps of the validation set and a number of building roof images pic2 inside them;
S4-6, obtain at least 1 registered remote sensing image map in the test set and a number of building roof images pic3, including building outlines and their interiors, to be classified; substitute pic3 into Sf and take the material corresponding to the model Smaxf with the maximum of the Q confidences as the predicted material for the roof in pic3; use the corresponding chartlet model Fmaxf(Smaxf) to obtain the building roofing-material texture map Pmaxstr, find the corresponding texture pattern in the chartlet library, and fill its texture into the corresponding building outlines in the aerial LIDAR point cloud map registered to the at least 1 registered remote sensing image map obtained in the test set, forming a chartlet layer PmaxstrL to complete the mapping.
10. The method of claim 9,
S1-1 further obtains a plurality of corresponding infrared remote sensing maps of the 1-660 cities at the same time, adds the corresponding infrared remote sensing map to each map group, establishes the unified urban geographic coordinate system E and the unified coordinate systems E(i), i = 1,2, …, N, with the same origin respectively, according to whether a complete preset standard building is present, and adds them to the training set, validation set and test set accordingly;

S3-2 further registers the corresponding infrared remote sensing maps with the aerial LIDAR point cloud maps carrying the predicted building outline Pf according to their respective coordinate systems, and maps the corresponding infrared remote sensing maps into the aerial LIDAR point cloud maps carrying the predicted building outline Pf to form a remote sensing infrared layer IRL; the optical information in the remote sensing image maps of the test set carrying the predicted building outline Pf is also mapped into the corresponding registered aerial LIDAR point cloud maps to form a remote sensing image layer RL;

between S4-5 and S4-6 there is further included S4-5', establishing a roofing-material infrared classification model Srem between the normalized intensities of the characteristic wave bands in the infrared remote sensing spectra of different roof types of different materials and the roof type, specifically comprising:

performing principal component analysis cluster analysis on the roofs of different materials corresponding to the FOIs of a number of buildings in the registered remote sensing infrared maps of the training set to obtain the loading plot, obtaining the characteristic wave bands sensitive to roofs of different materials, using the normalized intensities of the characteristic wave bands as the input of the neural network to establish the artificial-neural-network-based roofing-material infrared classification model Srem, and verifying it with the validation set;

at this point, S4-6 substitutes pic3 into Sf, takes the material corresponding to the model Smaxf with the maximum of the Q confidences as the predicted material classification sf for the roof in pic3, then obtains the normalized intensities of the characteristic wave bands in the infrared remote sensing map corresponding to pic3 and inputs them into the roofing-material infrared classification model Srem to obtain the roofing-material classification sremf; if sf = sremf, the corresponding chartlet model Fmaxf(Smaxf) is used to obtain the building roofing-material texture map Pmaxstr, the corresponding texture pattern is found in the chartlet library, and its texture is filled into the corresponding building outlines in the aerial LIDAR point cloud map registered to the at least 1 registered remote sensing image map obtained in the test set, forming a chartlet layer PmaxstrL to complete the mapping; on the contrary,

the roofing-material classification model Sremf corresponding to the roofing-material classification sremf is used to obtain the building roofing-material texture map Premf = Fremf(Sremf), where the subscript rem denotes the classification obtained via the roofing-material infrared classification model Srem; the corresponding texture pattern is found in the chartlet library and its texture is filled into the corresponding building outlines in the aerial LIDAR point cloud map registered to the at least 1 registered remote sensing image map obtained in the test set, forming a chartlet layer PremfL to complete the mapping.
11. The method as claimed in claim 10, wherein the characteristic-band spectrum in the infrared remote sensing map corresponding to pic2 is obtained by acquiring the remote sensing infrared spectra of corresponding points inside a number of building outlines in pic2, obtaining an average spectrum, and deriving the characteristic-band spectrum from the average spectrum, wherein the average spectrum is obtained as the arithmetic mean of the spectral-peak integral intensities of each band, or as a weighted average of the spectral-peak integral intensities of each band according to band weights.
12. An artificial neural network building contour extraction and texture mapping system based on sliding edge detection, characterized by comprising an aerial remote sensing system, a ground server and a client capable of data communication with one another, wherein,
the aerial remote sensing system comprises a satellite positioning device, a satellite remote sensing image shooting device, an aerial LIDAR point cloud system and a remote sensing infrared shooting device, which are used for synchronously shooting a ground visible light band remote sensing image, obtaining a remote sensing image picture, acquiring aerial LIDAR point cloud data, acquiring a remote sensing infrared picture and acquiring an infrared spectrum,
the ground server processes the remote sensing image maps, the acquired aerial LIDAR point cloud data and the remote sensing infrared spectra to complete building contour extraction and texture mapping according to the artificial neural network building contour extraction and texture mapping method based on sliding edge detection, and sends the building contour extraction and texture mapping results to a user upon the user's request,
the client can display the received result.
13. A non-transitory storage medium having stored therein a computer-readable program executable by a ground server to implement the artificial neural network building contour extraction and texture mapping method based on sliding edge detection of any one of claims 1-11.
CN202111324125.6A 2021-11-10 2021-11-10 Artificial neural network building texture mapping method and system based on sliding edge detection Active CN114241024B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111324125.6A CN114241024B (en) 2021-11-10 2021-11-10 Artificial neural network building texture mapping method and system based on sliding edge detection

Publications (2)

Publication Number Publication Date
CN114241024A true CN114241024A (en) 2022-03-25
CN114241024B CN114241024B (en) 2022-10-21

Family

ID=80748902

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930540A (en) * 2012-10-26 2013-02-13 中国地质大学(武汉) Method and system for detecting contour of urban building
CN102938164A (en) * 2012-12-05 2013-02-20 上海创图网络科技发展有限公司 Rapid modeling method based on aerial remote sensing photogrammetry
US20140051354A1 (en) * 2012-08-20 2014-02-20 Lg Electronics Inc. Mobile terminal, display device and method for controlling the mobile terminal
US20180063501A1 (en) * 2016-08-23 2018-03-01 Shanghai Hode Information Technology Co.,Ltd. Method and system of displaying a popping-screen
CN107844802A (en) * 2017-10-19 2018-03-27 中国电建集团成都勘测设计研究院有限公司 Water and soil conservation value method based on unmanned plane low-altitude remote sensing and object oriented classification
CN109063680A (en) * 2018-08-27 2018-12-21 湖南城市学院 Urban planning dynamic monitoring system and method based on high score remote sensing and unmanned plane
CN109084690A (en) * 2018-10-31 2018-12-25 杨凌禾讯遥感科技有限公司 Crop plant height calculation method based on unmanned plane visual remote sensing
CN109684929A (en) * 2018-11-23 2019-04-26 中国电建集团成都勘测设计研究院有限公司 Terrestrial plant ECOLOGICAL ENVIRONMENTAL MONITORING method based on multi-sources RS data fusion
CN110046572A (en) * 2019-04-15 2019-07-23 重庆邮电大学 A kind of identification of landmark object and detection method based on deep learning
CN113139453A (en) * 2021-04-19 2021-07-20 中国地质大学(武汉) Orthoimage high-rise building base vector extraction method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Haifeng et al.: "Rapidly Building 3D Landslide Models with Google SketchUp", Earth and Environment *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820976A (en) * 2022-04-24 2022-07-29 奥格科技股份有限公司 Rural building modeling method, system and storage medium integrating remote sensing image and shot image
CN115578643A (en) * 2022-12-06 2023-01-06 东莞先知大数据有限公司 Farmland area building detection method, electronic device and storage medium
CN115578643B (en) * 2022-12-06 2023-02-17 东莞先知大数据有限公司 Farmland regional building detection method, electronic equipment and storage medium
CN115937439A (en) * 2023-03-02 2023-04-07 航天宏图信息技术股份有限公司 Method and device for constructing three-dimensional model of urban building and electronic equipment

Also Published As

Publication number Publication date
CN114241024B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN114241024B (en) Artificial neural network building texture mapping method and system based on sliding edge detection
CN113920266B (en) Artificial intelligence generation method and system for semantic information of city information model
US6654690B2 (en) Automated method for making a topographical model and related system
US7515153B2 (en) Map generation device, map delivery method, and map generation program
CN112785643A (en) Indoor wall corner two-dimensional semantic map construction method based on robot platform
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
CN109615611A (en) A kind of insulator self-destruction defect inspection method based on inspection image
CN107610164B (en) High-resolution four-number image registration method based on multi-feature mixing
CN108535321A (en) A kind of building thermal technique method for testing performance based on three-dimensional infrared thermal imaging technique
CN109919951B (en) Semantic-associated object-oriented urban impervious surface remote sensing extraction method and system
Lin et al. Applications of computer vision on tile alignment inspection
CN113642463B (en) Heaven and earth multi-view alignment method for video monitoring and remote sensing images
CN110852164A (en) YOLOv 3-based method and system for automatically detecting illegal building
Vetrivel et al. Segmentation of UAV-based images incorporating 3D point cloud information
CN111241994A (en) Method for extracting remote sensing image rural highway desertification road section for deep learning
Nex et al. Automatic roof outlines reconstruction from photogrammetric DSM
CN115512247A (en) Regional building damage grade assessment method based on image multi-parameter extraction
CN114973116A (en) Method and system for detecting foreign matters embedded into airport runway at night by self-attention feature
KR20130133596A (en) Method and apparatus for measuring slope of poles
JPH11328378A (en) Method and device for updating map information
KR102039048B1 (en) Apparatus and method for generating of simulated satellite images
Watanabe et al. Detecting changes of buildings from aerial images using shadow and shading model
CN113378754A (en) Construction site bare soil monitoring method
CN114187313B (en) Artificial neural network building contour extraction method based on sliding edge detection
CN114092805A (en) Robot dog crack recognition method based on building model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 22 / F, building 683, zone 2, No. 5, Zhongguancun South Street, Haidian District, Beijing 100086

Applicant after: Terry digital technology (Beijing) Co.,Ltd.

Address before: 100089 22 / F, building 683, zone 2, 5 Zhongguancun South Street, Haidian District, Beijing

Applicant before: Terra-IT Technology (Beijing) Co.,Ltd.

GR01 Patent grant