NL2032560A - Deep learning model performance evaluation method and system - Google Patents


Info

Publication number
NL2032560A
NL2032560A
Authority
NL
Netherlands
Prior art keywords
predicted
connectivity
sequence
similarity
concatenation
Prior art date
Application number
NL2032560A
Other languages
Dutch (nl)
Other versions
NL2032560B1 (en)
Inventor
Liu Li
Wen Xuehu
Zhou Qi
Li Weiqing
Dong Xianmin
Original Assignee
The Third Geoinformation Mapping Institute Of Mini Of Natural Resources
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Third Geoinformation Mapping Institute Of Mini Of Natural Resources filed Critical The Third Geoinformation Mapping Institute Of Mini Of Natural Resources
Publication of NL2032560A publication Critical patent/NL2032560A/en
Application granted granted Critical
Publication of NL2032560B1 publication Critical patent/NL2032560B1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/457 Local feature extraction by analysis of parts of the pattern by analysing connectivity, e.g. edge linking, connected component analysis or slices
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776 Validation; Performance evaluation
    • G06V10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a deep learning model performance evaluation method and system. Connectivity patches in a true label and predicted results are first acquired respectively; information of the connectivity patches is then calibrated according to a spatial position relation; similarities between a connectivity patch sequence of the true label and connectivity patch sequences of the predicted results are calculated according to the calibrated information of the connectivity patches; and the deep learning model performance is evaluated according to the similarities. According to the present invention, by using the connectivity patches as a granularity, a focusing spatial position, and a spatial relationship, the model performance is evaluated according to the similarity between the connectivity sequence of a labeled image and the connectivity sequences of predicted images, so that the retention of the predicted results in terms of spatial geometric characteristics can be directly reflected.

Description

DEEP LEARNING MODEL PERFORMANCE EVALUATION METHOD AND
SYSTEM
TECHNICAL FIELD
[01] The present invention relates to the field of deep learning for remote sensing interpretation and surveying and mapping production, and in particular to a deep learning semantic segmentation model performance evaluation method and system based on the connectivity of remote sensing ground feature patches.
BACKGROUND ART
[02] An existing deep learning semantic segmentation model performance evaluation system is oriented to the pixel granularity. In the work of applying deep learning to surveying and mapping production and remote sensing interpretation, existing model evaluation indicators such as accuracy, precision, recall and intersection-over-union (IoU) cannot directly reflect the retention of predicted results in terms of spatial geometric characteristics, such as buildings clustered into blocks and roads kept connected in strips, so it is difficult to objectively determine whether a pre-trained model meets the actual needs of remote sensing interpretation and surveying and mapping production.
[03] Therefore, there is an urgent need in the art for a deep learning model performance evaluation solution based on the connectivity of remote sensing ground feature patches.
SUMMARY
[04] The present invention aims to provide a deep learning model performance evaluation method and system. By using connectivity patches as a granularity, a focusing spatial position, and a spatial relationship, the performance of a model is evaluated according to similarities of a connectivity sequence of a labeled image and connectivity sequences of predicted images, so that the problem that existing model evaluation indicators cannot directly reflect the retention of predicted results in terms of spatial geometric characteristics is solved.
[05] To achieve the above-mentioned purpose, the present invention provides the following solution:
[06] a deep learning model performance evaluation method, including:
[07] acquiring a first connectivity patch and a second connectivity patch, the first connectivity patch being extracted from a true label, the second connectivity patch being extracted from predicted results corresponding to the true label, and the predicted results including a predicted result of each Epoch model;
[08] respectively extracting contour information of the first connectivity patch and the second connectivity patch and a total number of pixel points in the patches to respectively obtain a standard marked connectivity sequence A and a predicted connectivity sequence set B, the predicted connectivity sequence set B including several predicted connectivity sequences Bk, Bk representing a predicted connectivity sequence corresponding to a k-th Epoch model, and each predicted connectivity sequence Bk including the contour information of the second connectivity patch corresponding to the predicted result of one Epoch model and the total number of the pixel points in the patch;
[09] performing spatial calibration on each predicted connectivity sequence Bk in the predicted connectivity sequence set B according to the standard marked connectivity sequence A to obtain a calibrated connectivity sequence set C, the calibrated connectivity sequence set C including several calibrated connectivity sequences Ck, and each calibrated connectivity sequence Ck being obtained by calibrating one predicted connectivity sequence Bk;
[10] calculating a Euclidean distance between an i-th element ai in the standard marked connectivity sequence A and a j-th element cj in the calibrated connectivity sequence Ck in the calibrated connectivity sequence set C;
[11] constructing a similarity matrix P of m×n, an element P(i, j) in the similarity matrix P being distance(ai, cj), where m is the total number of elements in the standard marked connectivity sequence A, and n is the total number of elements in the calibrated connectivity sequence Ck;
[12] by respectively taking the elements at two ends of a target diagonal in the similarity matrix P as a start point and an end point and taking the target diagonal as a main diagonal or an auxiliary diagonal, searching out a path from the start point to the end point in sequence, in the searching process, the element with the minimum element value in a forward movement direction being searched in each step;
[13] summing all the elements on the path to obtain similarities between the standard marked connectivity sequence A and the calibrated connectivity sequences Ck;
[14] traversing all the calibrated connectivity sequences Ck in the calibrated connectivity sequence set C to obtain a similarity set; and
[15] evaluating a deep learning model according to the similarity set.
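The similarity calculation in steps [10] to [13] can be sketched as follows. This is a minimal illustration, not the patented implementation: each sequence element is assumed to be a small numeric feature vector (e.g. the total pixel count of a patch), and the greedy minimum-valued forward step stands in for the path searching of step [12].

```python
import math

def sequence_similarity(A, C):
    """Greedy-path similarity between two connectivity sequences.

    A and C are lists of per-patch feature vectors. P[i][j] is the
    Euclidean distance between element ai and element cj; the value
    returned is the sum of the elements on a path searched from
    P[0][0] to P[m-1][n-1], choosing the minimum-valued forward step
    (right, down, or diagonal) each time. A smaller sum means the
    two sequences are more alike.
    """
    m, n = len(A), len(C)
    # Similarity matrix P of m x n: P[i][j] = distance(ai, cj).
    P = [[math.dist(a, c) for c in C] for a in A]
    i = j = 0
    total = P[0][0]
    while (i, j) != (m - 1, n - 1):
        candidates = []
        if i + 1 < m:
            candidates.append((P[i + 1][j], i + 1, j))  # move down
        if j + 1 < n:
            candidates.append((P[i][j + 1], i, j + 1))  # move right
        if i + 1 < m and j + 1 < n:
            candidates.append((P[i + 1][j + 1], i + 1, j + 1))  # diagonal
        step, i, j = min(candidates)  # minimum element value wins
        total += step
    return total
```

Identical sequences yield a path sum of zero; any discrepancy between label and prediction increases the sum.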
[16] In some embodiments, each connectivity patch includes one or more pixels; in the same connectivity patch, the pixels have same pixel values; and when the connectivity patch has a plurality of pixels, in the connectivity patch, any pixel has an adjacent pixel in a horizontal direction or a vertical direction.
[17] In some embodiments, the acquiring a first connectivity patch and a second connectivity patch, the first connectivity patch being extracted from a true label, the second connectivity patch being extracted from predicted results corresponding to the true label, and the predicted results including a predicted result of each Epoch model specifically includes:
[18] using Two-Pass to perform progressive scanning and searching on the true label and the predicted result of each Epoch model from left to right and from top to bottom, so as to obtain several first connectivity patches and several second connectivity patches.
[19] In some embodiments, the respectively extracting contour information of the first connectivity patch and the second connectivity patch and a total number of pixel points in the patches to respectively obtain a standard marked connectivity sequence A and a predicted connectivity sequence set B, the predicted connectivity sequence set B including several predicted connectivity sequences Bk, Bk representing a predicted connectivity sequence corresponding to a k-th Epoch model, and each predicted connectivity sequence Bk including the contour information of the second connectivity patch corresponding to the predicted result of one Epoch model and the total number of the pixel points in the patch specifically includes:
[20] extracting contour information of the first connectivity patch and a total number of pixel points in the patch to obtain a standard marked connectivity sequence A;
[21] extracting contour information of the second connectivity patch corresponding to the predicted result of one Epoch model and a total number of pixel points in the patch to obtain one predicted connectivity sequence Bk; and
[22] traversing the second connectivity patches corresponding to the predicted results of all the Epoch models to obtain a predicted connectivity sequence set B.
[23] In some embodiments, the performing spatial calibration on each predicted connectivity sequence Bk in the predicted connectivity sequence set B according to the standard marked connectivity sequence A to obtain a calibrated connectivity sequence set C specifically includes:
[24] determining, according to a spatial intersection position relation, a spatial correspondence relation between the connectivity patch in the standard marked connectivity sequence A and the connectivity patch in each predicted connectivity sequence Bk in the predicted connectivity sequence set B;
[25] re-arranging, according to an index order of the standard marked connectivity sequence A, the connectivity patches, which have the spatial intersection position relation with the standard marked connectivity sequence A, among the predicted connectivity sequences Bk, deleting the connectivity patches having no spatial intersection position relation as redundancy, and obtaining calibrated connectivity sequences Ck; and
[26] traversing all the predicted connectivity sequences Bk to obtain a calibrated connectivity sequence set C.
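The calibration of paragraphs [24] to [26] can be sketched as below; purely for illustration, each connectivity patch is assumed to be represented as a set of (row, col) pixel coordinates, so that spatial intersection is set intersection.

```python
def calibrate(label_patches, predicted_patches):
    """Re-order predicted connectivity patches by spatial intersection.

    For every patch in the standard marked sequence (in index order),
    keep the predicted patches that spatially intersect it; predicted
    patches intersecting no label patch are deleted as redundancy.
    """
    calibrated = []
    used = set()
    for label in label_patches:
        for idx, pred in enumerate(predicted_patches):
            if idx not in used and label & pred:  # spatial intersection
                calibrated.append(pred)
                used.add(idx)
    return calibrated
```

For example, a predicted patch at (9, 9) that touches no label patch is dropped, and the remaining patches are emitted in the index order of the standard marked sequence rather than their scan order.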
[27] In some embodiments, the evaluating a deep learning model according to the similarity set specifically includes:
[28] calculating a connectivity similarity indicator according to the similarity set, a larger value of the connectivity similarity indicator indicating better connectivity retention of a predicted result of the deep learning model.
[29] In some embodiments, the calculating a connectivity similarity indicator according to the similarity set specifically includes:
[30] using an interval mapping formula to calculate, according to the similarity set, a connectivity similarity indicator, the interval mapping formula being d_csim = (d_max - d) / (d_max - d_min), where d represents a similarity; d_min represents the minimum value of all the similarities in the similarity set; d_max represents the maximum value of all the similarities in the similarity set; and d_csim represents a connectivity similarity indicator value.
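The interval mapping of the similarity set onto [0, 1] can be sketched as follows; treating a larger CSIM as corresponding to a smaller raw path-sum distance is an assumption here, chosen to be consistent with paragraph [28] (a larger indicator value indicates better connectivity retention).

```python
def csim(similarities):
    """Map raw similarities (path-sum distances; smaller is better)
    onto [0, 1]: d_csim = (d_max - d) / (d_max - d_min)."""
    d_min, d_max = min(similarities), max(similarities)
    if d_max == d_min:
        # All Epoch models are equally similar to the true label.
        return [1.0] * len(similarities)
    return [(d_max - d) / (d_max - d_min) for d in similarities]
```

The best Epoch model in the set maps to 1 and the worst to 0, which makes CSIM values comparable across training runs.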
[31] The present invention further provides a deep learning model performance evaluation system. The system includes:
[32] a connectivity patch identification unit configured for acquiring a first connectivity patch and a second connectivity patch, the first connectivity patch being extracted from a true label, the second connectivity patch being extracted from predicted results corresponding to the true label, and the predicted results including a predicted result of each Epoch model;
[33] a connectivity sequence division unit configured for respectively extracting contour information of the first connectivity patch and the second connectivity patch and a total number of pixel points in the patches to respectively obtain a standard marked connectivity sequence A and a predicted connectivity sequence set B, the predicted connectivity sequence set B including several predicted connectivity sequences Bx, Bx representing a predicted connectivity sequence corresponding to a k™ Epoch model, and each predicted connectivity sequence Bx including the contour information of the second connectivity patch corresponding to the predicted result of one Epoch model and the total number of the pixel points in the patch;
[34] a spatial calibration unit configured for performing, according to the standard marked connectivity sequence A, spatial calibration on each predicted connectivity sequence Bk in the predicted connectivity sequence set B to obtain a calibrated connectivity sequence set C, the calibrated connectivity sequence set C including several calibrated connectivity sequences Ck, and each calibrated connectivity sequence Ck being obtained by calibrating one predicted connectivity sequence Bk;
[35] a distance calculation unit configured for calculating a Euclidean distance between an i-th element ai in the standard marked connectivity sequence A and a j-th element cj in the calibrated connectivity sequence Ck in the calibrated connectivity sequence set C;
[36] a similarity matrix construction unit configured for constructing a similarity matrix P of m×n, an element P(i, j) in the similarity matrix P being distance(ai, cj), where m is the total number of elements in the standard marked connectivity sequence A, and n is the total number of elements in the calibrated connectivity sequence Ck;
[37] a path searching unit configured for, by respectively taking the elements at two ends of a target diagonal in the similarity matrix P as a start point and an end point and taking the target diagonal as a main diagonal or an auxiliary diagonal, searching out a path from the start point to the end point in sequence, in the searching process, the element with the minimum element value in a forward movement direction being searched in each step;
[38] a similarity calculation unit, configured for summing all the elements on the path to obtain similarities between the standard marked connectivity sequence A and the calibrated connectivity sequences Ck;
[39] a similarity set acquisition unit configured for traversing all the calibrated connectivity sequences Ck in the calibrated connectivity sequence set C to obtain a similarity set; and
[40] an evaluation unit configured for evaluating a deep learning model according to the similarity set.
[41] In some embodiments, the evaluating a deep learning model according to the similarity set specifically includes:
[42] calculating a connectivity similarity indicator according to the similarity set, a larger value of the connectivity similarity indicator indicating better connectivity retention of a predicted result of the deep learning model.
[43] In some embodiments, the calculating a connectivity similarity indicator according to the similarity set specifically includes:
[44] using an interval mapping formula to calculate, according to the similarity set, a connectivity similarity indicator, the interval mapping formula being d_csim = (d_max - d) / (d_max - d_min), where d represents a similarity; d_min represents the minimum value of all the similarities in the similarity set; d_max represents the maximum value of all the similarities in the similarity set; and d_csim represents a connectivity similarity indicator value.
[45] According to the specific embodiments provided by the present invention, the present invention discloses the following technical effects.
[46] According to the deep learning model performance evaluation method and system provided by the present invention, the connectivity patches in the true label and the predicted results are first acquired respectively; information of these connectivity patches are then calibrated according to the spatial position relation; the similarities between the connectivity patch sequence of the true label and the connectivity patch sequences of the predicted results are calculated according to the calibrated information of the connectivity patches; and the performance of the deep learning model is evaluated according to the similarities. According to the present invention, by using the connectivity patches as a granularity, a focusing spatial position, and a spatial relationship, the model performance is evaluated according to the similarity between the connectivity sequence of a labeled image and the connectivity sequences of predicted images, so that the retention of the predicted results in terms of spatial geometric characteristics can be directly reflected.
BRIEF DESCRIPTION OF THE DRAWINGS
[47] In order to describe embodiments of the present invention or technical solutions in the existing art more clearly, drawings required to be used in the embodiments will be briefly introduced below. It is apparent that the drawings in the descriptions below are only some embodiments of the present invention. Those of ordinary skill in the art also can obtain other drawings according to these drawings without making creative work.
[48] FIG. 1 is a schematic diagram of comparison between a true label and predicted results provided by the present invention.
[49] FIG. 2 is a schematic diagram of comparison of a model performance evaluation method provided by the present invention.
[50] FIG. 3 is a flow chart of a deep learning model performance evaluation method provided by Embodiment I of the present invention.
[51] FIG. 4 is a schematic diagram of connectivity patches obtained by scanning provided by Embodiment I of the present invention.
[52] FIG. 5 is a schematic diagram of spatial calibration provided by Embodiment I of the present invention.
[53] FIG. 6 is a schematic diagram of dynamic connectivity area regularization provided by Embodiment I of the present invention.
[54] FIG. 7 is a block diagram of a deep learning model performance evaluation system provided by Embodiment II of the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[55] The following clearly and completely describes the technical solution in the embodiments of the present disclosure in combination with the accompanying drawings of the embodiments of the present disclosure. Apparently, the described embodiments are only part of the embodiments of the present disclosure, not all embodiments. Based on the embodiments in the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
[56] The present invention aims to provide a deep learning model performance evaluation method and system. On the basis of making a full consideration of whether pixel classification is correct, by using connectivity patches as a granularity, a focusing spatial position, and a spatial relationship, the model performance is evaluated according to similarities of a connectivity sequence of a labeled image and connectivity sequences of predicted images, so that the problem that existing model evaluation indicators cannot directly reflect the retention of predicted results in terms of spatial geometric characteristics is solved.
[57] According to the present invention, according to the connectivity of remote sensing ground features, a difference between a true label and predicted results is determined on the basis of ground feature connection, so as to evaluate the model performance. Generally speaking, when a model selected by this evaluation method is used to carry out prediction, patches of predicted results will be more clustered and less trivial.
[58] Deep learning semantic segmentation is a pixel-level-oriented classification system. Evaluation indicators are built on the basis of a case analysis table that represents image prediction and classification results, namely a confusion matrix. Elements of the confusion matrix are statistical values of the prediction and classification results.
Table 1 Analysis of the confusion matrix

                                  Real value
Confusion matrix            Positive    Negative
Predicted value  Positive   TP          FP
                 Negative   FN          TN
[59] Common indicators include accuracy, precision, recall, and IoU. Accuracy indicates the proportion of correctly classified foreground and background pixels. Precision indicates the ratio of pixels correctly predicted to be positive to all pixels predicted to be positive. Recall indicates the ratio of pixels correctly predicted to be positive to all pixels that should be predicted to be positive. IoU represents the ratio of the intersection to the union of the foreground predicted results and the true value.
[60] Accuracy = (TP + TN) / (TP + FP + FN + TN).
[61] Precision = TP / (TP + FP).
[62] Recall = TP / (TP + FN).
[63] IoU = TP / (TP + FP + FN).
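As a sketch, the four indicators can be computed directly from the confusion-matrix counts. The counts TP = 18, FP = 0, FN = 14, TN = 32 in the usage example are hypothetical values chosen to be consistent with the 64-pixel example of FIG. 1 (32 foreground pixels, 18 of them recovered, no false positives); after rounding they reproduce the figures reported in Table 2.

```python
def pixel_metrics(tp, fp, fn, tn):
    """Pixel-granularity indicators derived from the confusion matrix."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "iou": tp / (tp + fp + fn),
    }

# Hypothetical counts for the FIG. 1 example: accuracy 0.78125,
# precision 1.0, recall 0.5625, IoU 0.5625 -- i.e. the rounded
# values 0.78 / 1 / 0.56 / 0.56 of Table 2.
metrics = pixel_metrics(tp=18, fp=0, fn=14, tn=32)
```

Note that any prediction with these counts yields the same four values, regardless of how the 18 correct pixels are spatially arranged, which is exactly the limitation the CSIM indicator addresses.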
[64] Since an existing performance evaluation system for a deep learning semantic segmentation model is oriented to the pixel granularity, in the work of applying deep learning to surveying and mapping production and remote sensing interpretation, existing model evaluation indicators such as accuracy, precision, recall and IoU cannot directly reflect the retention of predicted results in terms of spatial geometric characteristics, such as buildings clustered into blocks and roads kept connected in strips, so it is difficult to objectively determine whether a pre-trained model meets the actual needs of remote sensing interpretation and surveying and mapping production. Therefore, relying solely on the criterion of whether pixels are correctly marked to select a model is not well suited to remote sensing interpretation and surveying and mapping production.
[65] Referring to FIG. 1, FIG. 1 describes possible situations in predicted results of deep learning semantic segmentation, where FIG. 1-a represents a true label including a total of 8 × 8 grids, and each grid represents one pixel, so there are a total of 64 pixels. The black grids represent pixels of remote sensing ground features, with a total of 32 pixels. A pre-trained model with the ability to identify the remote sensing ground features is obtained after training by a deep learning neural network. Predicting a new remote sensing image may yield three types of results: in FIG. 1-b, the predicted remote sensing ground feature has a complete contour; in FIG. 1-c, the remote sensing ground feature has an incomplete contour, but the spatial position of the ground feature remains relatively complete; and in FIG. 1-d, although some pixels of the remote sensing ground feature are correctly predicted, their integrity is extremely poor.
[66] After the calculation of the accuracy, precision, recall, and IoU, it is found that each evaluation indicator has the same value for the three predicted results.
Table 2 Existing evaluation indicator results

Indicators    FIG. 1-b    FIG. 1-c    FIG. 1-d
Accuracy      0.78        0.78        0.78
Precision     1           1           1
Recall        0.56        0.56        0.56
IoU           0.56        0.56        0.56
[67] From the perspective of the post-processing of the existing remote sensing interpretation, in FIG. 1-b, ideal results can be achieved after hole filling; in FIG. 1-c, the spatial position can be better retained after small patches are filtered; the results in
FIG. 1-d are of little practical value, so the order of the expected results should be: FIG. 1-b > FIG. 1-c > FIG. 1-d. In summary, under the special needs of a remote sensing application scenario, it is difficult for a conventional evaluation method to meet the requirements, and the existing evaluation indicators can neither effectively guide the evaluation of the true identification quality of the remote sensing ground features, nor guide the optimal selection of trained models.
[68] In an existing indicator system, for the prediction effect of a model, larger numbers of TPs and TNs in the confusion matrix indicate a more accurate classification result; conversely, achieving higher classification accuracy requires smaller numbers of FPs and FNs. A common evaluation indicator is a ratio between 0 and 1, converted from a statistical analysis of the quantities in the confusion matrix, which is convenient for standardized measurement. Therefore, the existing indicator system only evaluates the model performance on the basis of the criterion of whether the pixel classification is correct, while in the present invention, on the basis of making a full consideration of whether pixel classification is correct, by using connectivity patches as a granularity, a focusing spatial position, and a spatial relationship, the model performance is evaluated according to similarities of a connectivity sequence of a labeled image and connectivity sequences of predicted images.
[69] For the application of intelligent remote sensing interpretation, the present invention aims to provide a deep learning semantic segmentation model performance evaluation method based on the connectivity of remote sensing ground feature patches:
A connectivity similarity (CSIM) indicator is configured to quantitatively describe the prediction quality of a deep learning semantic segmentation model oriented to remote sensing interpretation results, so as to solve the problem that the connectivity of ground feature patches cannot be considered in the existing evaluation methods, and achieve the performance evaluation of the deep learning semantic segmentation model in terms of remote sensing interpretation quality. As shown in FIG. 2, it is difficult for the existing evaluation indicators to reflect a performance difference as the training goes on, so the model performance cannot be effectively evaluated. In the present invention, the difference in the CSIM indicator is apparent in the training process, especially at the end of the training, so that the model performance can be effectively evaluated.
[70] The main idea of the evaluation method of the present invention is as follows: the deep learning semantic segmentation model is trained for several epochs (Epoch = {1, 2, ..., y}; each training pass of deep learning is one Epoch, the number depending on the number of trainings; k represents the serial number of a specific Epoch, and y represents the total number of Epochs). Connectivity patches in the predicted results and the true label are used as evaluation objects, and statistical characteristics of the connectivity patches are focused on; similarities between the predicted results and the true result are calculated by taking a spatial relation as an association condition, thus evaluating the remote sensing semantic segmentation performance of the model. The main calculation steps of the evaluation method are divided into five parts: identification of connectivity patches, extraction of characteristics of the connectivity patches, spatial calibration of connectivity sequences, evaluation of the similarities of the connectivity patches, and calculation of a connectivity similarity indicator. The data input includes predicted result labels of several Epoch models in a same area and a unique standard true label, and the data output is the CSIM indicators of the several Epochs.
[71] In order to make the above-mentioned purposes, characteristics, and advantages of the present invention more obvious and understandable, the present invention is further described in detail below with reference to the accompanying drawings and specific implementations.
[72] Embodiment I:
[73] As shown in FIG. 3, this embodiment provides a deep learning model performance evaluation method. The method includes the following.
[74] In S1, a first connectivity patch and a second connectivity patch are acquired, the first connectivity patch being extracted from a true label, and the second connectivity patch being extracted from predicted results corresponding to the true label. The predicted results include a predicted result of each Epoch model. Each connectivity patch includes one or more pixels; in a same connectivity patch, the pixels have the same pixel values; and when the connectivity patch has a plurality of pixels, any pixel in the connectivity patch has an adjacent pixel in a horizontal direction or a vertical direction.
[75] The method for acquiring a first connectivity patch and a second connectivity patch may be: using Two-Pass to perform progressive scanning and searching on the true label and the predicted result of each Epoch model from left to right and from top to bottom, so as to obtain several first connectivity patches and several second connectivity patches.
[76] Generally speaking, in this embodiment, in the true label and the predicted results of deep learning semantic segmentation, adjacent pixel sets having the same pixel values in the horizontal and vertical directions are defined to be connectivity patches. In the identification of the connectivity patches, Two-Pass is used to perform progressive scanning and searching on the true label and the predicted results from left to right and from top to bottom, mark all the pixel points of a same connectivity area as a same connectivity index value, and store, according to an index order, the same connectivity index values to a sequence. After the scanning is completed, a connectivity sequence marked with all the connectivity patch pixel sets is obtained. As shown in FIG. 4, three connectivity patch sequences 6, 7, and 8 will be obtained via the identification of the connectivity patches. The connectivity sequence obtained by scanning the true label is used as the first connectivity patch, and the connectivity sequences obtained by scanning the predicted results corresponding to the true label are used as the second connectivity patches.
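The identification step above can be illustrated with a minimal Two-Pass labeller. This is a sketch, not the patent's implementation: the function name and the union-find bookkeeping are assumptions, and only 4-connectivity (same value, horizontally or vertically adjacent) is considered, as defined in this embodiment.

```python
def two_pass_label(image):
    """Two-pass connected-component labeling with 4-connectivity.

    Pixels belong to the same connectivity patch when they share a pixel
    value and are adjacent horizontally or vertically.  Returns the label
    grid and the number of connectivity patches found.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}  # union-find over provisional labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    # First pass: scan left to right, top to bottom, assigning provisional
    # labels from the already-visited up/left neighbours of equal value.
    for i in range(rows):
        for j in range(cols):
            v = image[i][j]
            up = labels[i - 1][j] if i > 0 and image[i - 1][j] == v else 0
            left = labels[i][j - 1] if j > 0 and image[i][j - 1] == v else 0
            if up and left:
                labels[i][j] = min(up, left)
                union(up, left)
            elif up or left:
                labels[i][j] = up or left
            else:
                labels[i][j] = next_label
                parent[next_label] = next_label
                next_label += 1
    # Second pass: replace provisional labels with their union-find root,
    # then compact the roots into consecutive connectivity index values.
    roots = sorted({find(l) for row in labels for l in row})
    remap = {r: k + 1 for k, r in enumerate(roots)}
    for i in range(rows):
        for j in range(cols):
            labels[i][j] = remap[find(labels[i][j])]
    return labels, len(roots)
```

For example, the 3x3 grid [[1, 1, 0], [0, 1, 0], [0, 0, 1]] yields four connectivity patches under this adjacency definition, since the two zero-valued regions are not horizontally or vertically connected to each other.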
[77] In S2, contour information of the first connectivity patch and the second connectivity patch and a total number of pixel points in the patches are respectively extracted to respectively obtain a standard marked connectivity sequence A and a predicted connectivity sequence set B, the predicted connectivity sequence set B including several predicted connectivity sequences Bk, Bk representing a predicted connectivity sequence corresponding to a k-th Epoch model, and each predicted connectivity sequence Bk including the contour information of the second connectivity patch corresponding to the predicted result of one Epoch model and the total number of the pixel points in the patch.
[78] Specifically:
Contour information of the first connectivity patch and a total number of pixel points in the patch are extracted to obtain a standard marked connectivity sequence A, A = {a1, a2, ..., ai, ..., am}, where ai represents the two-tuple of the contour information and the total number of pixel points of an i-th connectivity patch, and m represents the total number of connectivity patches in the standard marked connectivity sequence.
[80] Contour information of the second connectivity patch corresponding to the predicted result of one Epoch model and a total number of pixel points in the patch are extracted to obtain one predicted connectivity sequence Bk, Bk = {b1, b2, ..., bi, ..., bl}, where bi represents the two-tuple of the contour information and the total number of pixel points of an i-th connectivity patch, and l represents the total number of connectivity patches in the predicted connectivity sequence Bk.
[81] The second connectivity patches corresponding to the predicted results of all the Epoch models are traversed to obtain a predicted connectivity sequence set B, B = {B1, B2, ..., Bk, ..., By}, where Bk represents the predicted connectivity sequence of a k-th Epoch model, and y represents the total number of the Epoch models. After the traversing is completed, one predicted connectivity sequence set that records the characteristic information of all connectivity objects is obtained. In this way, each predicted connectivity sequence records the statistical information of the connectivity characteristics of one Epoch.
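The characteristic-extraction step can be sketched as follows, assuming the connectivity patches have already been identified as a grid of index values. The contour test used here (a pixel with any 4-neighbour outside the patch) is an illustrative choice for representing the contour information; the function name is hypothetical.

```python
def patch_features(labels, num_patches):
    """Build a connectivity sequence: for each patch index 1..num_patches,
    record the two-tuple (contour pixel coordinates, total pixel count)."""
    rows, cols = len(labels), len(labels[0])
    contours = {k: set() for k in range(1, num_patches + 1)}
    counts = {k: 0 for k in range(1, num_patches + 1)}
    for i in range(rows):
        for j in range(cols):
            k = labels[i][j]
            counts[k] += 1
            # A pixel lies on the contour when some 4-neighbour falls
            # outside the image or belongs to a different patch.
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < rows and 0 <= nj < cols) or labels[ni][nj] != k:
                    contours[k].add((i, j))
                    break
    # Two-tuples are stored in index order, matching the scan order above.
    return [(sorted(contours[k]), counts[k]) for k in range(1, num_patches + 1)]
```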
[82] In S3, spatial calibration is performed on each predicted connectivity sequence Bk in the predicted connectivity sequence set B according to the standard marked connectivity sequence A to obtain a calibrated connectivity sequence set C, the calibrated connectivity sequence set C including several calibrated connectivity sequences Ck, and each calibrated connectivity sequence Ck being obtained by calibrating one predicted connectivity sequence Bk.
[83] Since the identification of the connectivity patches is performed by progressive scanning, the connectivity sequences in the above step S2 are stored in the order in which the connectivity patches are identified. There are often differences between a standard true patch and a predicted result patch of a remote sensing ground feature. As a result, it is hard for the order in the connectivity sequence A of the standard true label and the orders in the connectivity sequences Bk of the predicted result labels to keep an approximate correspondence relation.
[84] In order to ensure the optimal matching of the connectivity sequences A and Bk, in this embodiment, a spatial intersection position relation is used to determine a spatial correspondence relation between each connectivity patch in the standard marked connectivity sequence A and the connectivity patches in the predicted connectivity sequence Bk; the connectivity patches, which have the spatial intersection position relation with the standard marked connectivity sequence A, among the predicted connectivity sequence Bk are re-arranged according to an index order of the standard marked connectivity sequence A; the connectivity patches having no spatial intersection position relation are deleted as redundancy; and calibrated connectivity sequences Ck subjected to the redundancy removal and the spatial calibration are obtained, Ck = {c1, c2, ..., cj, ..., cn}, where cj represents information of a j-th connectivity patch after the spatial calibration, and n represents the total number of the connectivity patches after the calibration, thus obtaining the calibrated connectivity sequence set C, C = {C1, C2, ..., Ck, ..., Cy}, where Ck represents a calibrated connectivity sequence of a k-th Epoch model after the spatial calibration, and y represents the total number of the Epoch models.
[85] A specific calculation method for the spatial calibration of the standard marked connectivity sequence A and the connectivity sequences of the predicted connectivity sequence set B is as follows:
[86] Definition: the standard marked connectivity sequence A = {a1, a2, ..., ai, ..., am} includes m connectivity patches, and the predicted result label connectivity sequence Bk = {b1, b2, ..., bi, ..., bl} includes l connectivity patches.
[87] In step S3.1: The standard marked connectivity sequence A is traversed to acquire the contour characteristics of connectivity patch ai; the result is a set of boundary pixel coordinates of the connectivity patch.
[88] In step S3.2: The predicted result connectivity sequence set B = {B1, B2, ..., Bk, ..., By} is traversed; the predicted result connectivity sequence Bk is searched for patches that spatially intersect the contour position of ai to acquire a target patch sequence; the target patch sequence is added to a new connectivity sequence Ck in the new order; and the connectivity patches, which have no spatial relation with the standard marked connectivity sequence A, among the predicted connectivity sequence Bk are deleted as redundancy and are no longer retained in the sequence Ck. For example, connectivity patch b2 in the sequence Bk in FIG. 5 will be deleted as a redundant patch.
[89] In step S3.3: Steps S3.1 and S3.2 are repeated until the traversal of the predicted connectivity sequence set B is completed. The remaining unmatched connectivity patches in B are removed. The calibrated connectivity sequence set C = {C1, C2, ..., Ck, ..., Cy}, in which the predicted result connectivity sequences have been subjected to the spatial calibration, is output.
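The calibration loop of steps S3.1 to S3.3 might be sketched as follows. This is a simplification under stated assumptions: each sequence entry is a (contour coordinates, pixel count) two-tuple, and spatial intersection is approximated by shared contour coordinates between a true patch and a predicted patch, which is only one way of realizing the spatial intersection position relation described above.

```python
def spatially_calibrate(seq_a, seq_b):
    """Re-order the predicted connectivity sequence B_k against the
    standard marked sequence A and drop redundant patches.

    seq_a, seq_b: lists of (contour, count) two-tuples, where contour is
    a set of (row, col) coordinates.  Predicted patches are re-arranged
    in the index order of A; patches that intersect no true patch are
    removed as redundancy.
    """
    calibrated = []
    used = set()
    for contour_a, _ in seq_a:                    # traverse A in index order
        for idx, (contour_b, count_b) in enumerate(seq_b):
            # Shared contour coordinates stand in for spatial intersection.
            if idx not in used and contour_a & contour_b:
                calibrated.append((contour_b, count_b))
                used.add(idx)
    return calibrated  # predicted patches never matched are discarded
```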
[90] After the spatial calibration is completed, similarities of the connectivity sequences are calculated.
[91] The similarity between the standard marked connectivity sequence A and the calibrated connectivity sequence Cx after the spatial calibration is evaluated. If a conventional method similar to the Euclidean distance is used, an evaluation result cannot conform to the actual cognition due to the one-to-many relation of continuous sequences.
[92] If telescopic deformation of a connectivity area is allowed in a connectivity area sequence during matching, the matching effect will be greatly enhanced. Therefore, a dynamic connectivity area regularization model is designed to allow a matching mode where the data has a telescopic deformation on the connectivity area. During the matching of two sequences, points in the sequences are no longer in a one-to-one correspondence relation, but have different mapping relations of one-to-one, one-to-many, and many-to-one. This kind of distortion on the connectivity sequences is realized by minimizing an overall distance between the sequences. Specifically, in dynamic connectivity area regularization, the correspondence relation between two connectivity sequences is obtained through dynamic planning, thus obtaining the minimum distance between the sequences, as shown in FIG. 6.
[93] For the standard marked connectivity sequence A and a calibrated connectivity sequence Ck in the calibrated connectivity sequence set C, C = {C1, C2, ..., Ck, ..., Cy}, the lengths of the sequences are respectively |A| = m and |Ck| = n, and a method for calculating the similarity between the connectivity sequences A and Ck includes the following steps:
[94] In S4, a Euclidean distance distance(ai, cj) = (ai - cj)^2 between an i-th element ai in the standard marked connectivity sequence A and a j-th element cj in the calibrated connectivity sequence Ck in the calibrated connectivity sequence set C is calculated.
[95] In S5, a similarity matrix P of m x n is constructed, an element P(i, j) in the similarity matrix P being distance(ai, cj), where m is the total number of elements in the standard marked connectivity sequence A, and n is the total number of elements in the calibrated connectivity sequence Ck.
[96] In S6, by respectively taking the elements at two ends of a target diagonal in the similarity matrix P as a start point and an end point and taking the target diagonal as a main diagonal or an auxiliary diagonal, a path is searched out from the start point to the end point in sequence, in the searching process, the element with the minimum element value in a forward movement direction being searched in each step.
[97] For example, the upper left corner (1, 1) of the similarity matrix P is used as the search start point, and the lower right corner (m, n) is used as the end point. In the first search, (1, 1) is taken as the target point; the element with the minimum element value is searched among the three elements (2, 1), (1, 2), and (2, 2); the target point is moved to the element with the minimum element value; and the search is continued by taking that element as the new target point. In each search, the element with the minimum element value is searched among the three elements (i+1, j) on the right of the target point (i, j), (i, j+1) below the target point (i, j), and (i+1, j+1) on the lower right side of the target point (i, j), and the target point is then moved to the element with the minimum element value; the search is continued by taking that element as the target point, until it reaches the end point, thereby searching out one path.
[98] In S7, all the elements on the path are summed to obtain a similarity dk between the standard marked connectivity sequence A and the calibrated connectivity sequence Ck.
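Steps S4 to S7 can be illustrated with the following sketch. It assumes the per-element values being compared are scalars (for example, the pixel counts of the two-tuples) and uses the squared difference from S4 as the element distance; the walk is the greedy minimum-value search described above, not a full dynamic-programming alignment, and the function name is illustrative.

```python
def sequence_similarity(a_values, c_values):
    """S4-S5: build the m x n similarity matrix of pairwise distances.
    S6: walk greedily from P[0][0] to P[m-1][n-1], each step moving to
    the smallest of the three forward neighbours.
    S7: sum all elements on the path to obtain the similarity d_k."""
    m, n = len(a_values), len(c_values)
    P = [[(a - c) ** 2 for c in c_values] for a in a_values]
    i = j = 0
    total = P[0][0]
    while (i, j) != (m - 1, n - 1):
        moves = []
        if i + 1 < m:
            moves.append((P[i + 1][j], i + 1, j))
        if j + 1 < n:
            moves.append((P[i][j + 1], i, j + 1))
        if i + 1 < m and j + 1 < n:
            moves.append((P[i + 1][j + 1], i + 1, j + 1))
        _, i, j = min(moves)          # step to the minimum-valued neighbour
        total += P[i][j]              # accumulate the path elements
    return total
```

Identical sequences follow the zero diagonal and yield a similarity of 0; the larger the accumulated distance, the less similar the two connectivity sequences are.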
[99] In S8, all the calibrated connectivity sequences Ck in the calibrated connectivity sequence set C are traversed to obtain a similarity set D, D = {d1, d2, ..., dk, ..., dy}.
[100] In S9, the deep learning model is evaluated according to the similarity set.
[101] In this embodiment, specific evaluation steps include:
[102] a connectivity similarity indicator is calculated. The calculation of the connectivity similarity indicator consists of normalizing and integrating, on the basis of the similarity set D, the numerical value results in the similarity set D under a same scale. The data of each similarity dk in the similarity set D is mapped between 0 and 1 by using a min-max normalization method. Although the mapped numerical values are greatly affected by the boundary values, they have a higher differentiation degree. The calculation method is as follows:
[103] using an integral mapping formula to calculate, according to the similarity set, a connectivity similarity indicator, the integral mapping formula being dk_csim = (dk - dmin) / (dmax - dmin), where dk represents a similarity; dmin represents a minimum value of all the similarities in the similarity set; dmax represents a maximum value of all the similarities in the similarity set; and dk_csim represents a connectivity similarity indicator value.
[104] A larger value of the connectivity similarity indicator indicates better connectivity retention of a predicted result of the deep learning model.
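The integral mapping can be sketched as a one-line min-max normalization over the similarity set; the guard against all similarities being equal is an added assumption not stated in the text.

```python
def csim_indicators(similarities):
    """Map each similarity d_k in the set D onto [0, 1] using the
    integral mapping formula d_k_csim = (d_k - d_min) / (d_max - d_min)."""
    d_min, d_max = min(similarities), max(similarities)
    if d_max == d_min:            # degenerate case: all Epochs score equally
        return [0.0 for _ in similarities]
    return [(d - d_min) / (d_max - d_min) for d in similarities]
```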
[105] By applying the deep learning semantic segmentation model performance evaluation method provided by the embodiments of the present invention, the functional advances include the following:
[106] 1. For the field of remote sensing image semantic segmentation, bringing the CSIM indicator into the performance evaluation system for a model enables model training to be better monitored and controlled in real time.
[107] 2. For images with different geographical characteristics, selection of models with the best performance from a pre-trained model set can be guided.
[108] 3. On the basis of pixel granularity, accurate evaluation of the true quality of predicted results of remote sensing images based on geometric characteristics of ground features is achieved.
[109] By applying the deep learning semantic segmentation model performance evaluation method provided in this patent, the advantageous effects include the following:
[110] 1. High efficiency. The composite indicators for performance evaluation can make the whole process of model training more efficient, avoid over-training and model over-fitting, and save time and cost.
[111] 2. Controllability. By the CSIM, the ability of the model to maintain the spatial geometric characteristics can be evaluated, so that when the existing indicators are not significantly differentiated, a model with better representation of the connectivity characteristics of the ground feature patches is preferably selected.
[112] 3. Adaptability. For remote sensing images with different geographical characteristics, different time phases, and different sources, on the basis of the predicted results of pre-trained models and the CSIM statistical characteristics of the true label, the optimal selection of a pre-trained model can be achieved.
[113] Economic benefits:
[114] By applying the deep learning semantic segmentation model performance evaluation method CSIM provided in the present invention, the optimal selection of the deep learning semantic segmentation model and the true evaluation of the model performance can be effectively guided in the field of remote sensing interpretation and surveying and mapping production, and the true quality of remote sensing image interpreted results is more accurately evaluated. In practical applications, the deep learning semantic segmentation model training method is greatly optimized; the production work of basic surveying and mapping projects such as global mapping and basic scale topographic map updating is effectively supported; and the production working efficiency is greatly improved.
[115] Social benefits:
[116] The deep learning semantic segmentation model performance evaluation method CSIM provided in the present invention greatly promotes the application of deep learning in the field of remote sensing interpretation and surveying and mapping production, promotes the technological fusion of "deep learning + surveying and mapping/remote sensing" from the technical level, and provides an effective technical guarantee for the intelligence of investigation and monitoring and the intelligence of surveying and mapping production in the natural resource industry.
[117] Embodiment II
[118] As shown in FIG. 7, this embodiment provides a deep learning model performance evaluation system. The system includes:
[119] a connectivity patch identification unit M1, configured for acquiring a first connectivity patch and a second connectivity patch, the first connectivity patch being extracted from a true label, the second connectivity patch being extracted from predicted results corresponding to the true label, and the predicted results including a predicted result of each Epoch model;
[120] a connectivity sequence division unit M2, configured for respectively extracting contour information of the first connectivity patch and the second connectivity patch and a total number of pixel points in the patches to respectively obtain a standard marked connectivity sequence A and a predicted connectivity sequence set B, the predicted connectivity sequence set B including several predicted connectivity sequences Bk, Bk representing a predicted connectivity sequence corresponding to a k-th Epoch model, and each predicted connectivity sequence Bk including the contour information of the second connectivity patch corresponding to the predicted result of one Epoch model and the total number of the pixel points in the patch;
[121] a spatial calibration unit M3, configured for performing, according to the standard marked connectivity sequence A, spatial calibration on each predicted connectivity sequence Bk in the predicted connectivity sequence set B to obtain a calibrated connectivity sequence set C, the calibrated connectivity sequence set C including several calibrated connectivity sequences Ck, and each calibrated connectivity sequence Ck being obtained by calibrating one predicted connectivity sequence Bk;
[122] a distance calculation unit M4, configured for calculating a Euclidean distance between an i-th element ai in the standard marked connectivity sequence A and a j-th element cj in the calibrated connectivity sequence Ck in the calibrated connectivity sequence set C;
[123] a similarity matrix construction unit M5, configured for constructing a similarity matrix P of m x n, an element P(i, j) in the similarity matrix P being distance(ai, cj), where m is the total number of elements in the standard marked connectivity sequence A, and n is the total number of elements in the calibrated connectivity sequence Ck;
[124] a path searching unit M6, configured for, by respectively taking the elements at two ends of a target diagonal in the similarity matrix P as a start point and an end point and taking the target diagonal as a main diagonal or an auxiliary diagonal, searching out a path from the start point to the end point in sequence, in the searching process, the element with the minimum element value in a forward movement direction being searched in each step;
[125] a similarity calculation unit M7, configured for summing all the elements on the path to obtain similarities between the standard marked connectivity sequence A and the calibrated connectivity sequences Ck;
[126] a similarity set acquisition unit M8, configured for traversing all the calibrated connectivity sequences Ck in the calibrated connectivity sequence set C to obtain a similarity set; and
[127] an evaluation unit M9, configured for evaluating a deep learning model according to the similarity set.
[128] Specifically including:
[129] using an integral mapping formula to calculate, according to the similarity set, a connectivity similarity indicator, the integral mapping formula being dk_csim = (dk - dmin) / (dmax - dmin), where dk represents a similarity; dmin represents a minimum value of all the similarities in the similarity set; dmax represents a maximum value of all the similarities in the similarity set; and dk_csim represents a connectivity similarity indicator value. A larger value of the connectivity similarity indicator indicates better connectivity retention of a predicted result of the deep learning model.
[130] All the embodiments in the specification are described in a progressive manner. The contents mainly described in each embodiment differ from those described in the other embodiments; for the same or similar parts of the embodiments, reference may be made to one another. The system disclosed by the embodiments is described relatively simply since it corresponds to the method disclosed by the embodiments; for related parts, reference may be made to the corresponding descriptions of the method.
[131] The principle and implementation modes of the present invention are described by applying specific examples herein. The descriptions of the above embodiments are only intended to help to understand the method of the present invention and a core idea of the method. In addition, those ordinarily skilled in the art can make changes to the specific implementation modes and the application scope according to the idea of the present invention. From the above, the contents of the specification shall not be deemed as limitations to the present invention.

Claims (10)

Conclusies L Computergeimplementeerde deeplearningmodelprestatie-evaluatiewerkwijze, met het kenmerk dat de werkwijze het volgende omvat:Conclusions L Computer-implemented deep learning model performance evaluation method, characterized in that the method comprises: het verwerven van een eerste aaneengesloten stuk en een tweede aaneengesloten stuk, waarbij het eerste aaneengesloten stuk uit een waar label wordt geëxtraheerd, het tweede aaneengesloten stuk uit voorspelde resultaten die met het ware label corresponderen wordt geëxtraheerd, en de voorspelde resultaten een voorspeld resultaat van elk epochmodel omvatten;acquiring a first contig and a second contig, extracting the first contig from a true label, extracting the second contig from predicted results corresponding to the true label, and the predicted results being a predicted result from each include epoch model; het respectievelijk extraheren van contourinformatie van het eerste aaneengesloten stuk en het tweede aaneengesloten stuk en een totaal aantal pixelpunten in de stukken om respectievelijk een standaard gemarkeerde aaneensluitingsreeks A en een voorspeldeaaneensluitingsreeksverzameling B te verkrijgen, waarbij de voorspeldeaaneensluitingsreeksverzameling B verscheidene voorspelde aaneensluitingsreeksen Bk omvat, waarbij Bx een voorspelde aaneensluitingsreeks representeert die correspondeert met een k-de epochmodel, en waarbij elke voorspelde aaneensluitingsreeks Bx de contourinformatie van het tweede aaneengesloten stuk omvat die correspondeert met het voorspelde resultaat van één epochmodel en het totale aantal pixelpunten in het stuk;extracting contour information of the first contiguous piece and the second contiguous piece and a total number of pixel points in the pieces, respectively, to obtain a default marked concatenation sequence A and a predicted concatenation sequence set B, respectively, wherein the predicted concatenation sequence set B comprises several predicted concatenation 
sequences Bk, where Bx is a represents a predicted concatenation sequence corresponding to a kth epoch model, and wherein each predicted concatenation sequence Bx includes the contour information of the second contiguous piece corresponding to the predicted result of one epoch model and the total number of pixel points in the patch; het uitvoeren van ruimtelijke kalibratie op elke voorspelde aaneensluitingsreeks Bx in de voorspeldeaaneensluitingsreeksverzameling B volgens de standaard gemarkeerde aaneensluitingsreeks A om een gekalibreerdeaaneensluitingsreeksverzameling C te verkrijgen, waarbij de gekalibreerdeaaneensluitingsreeksverzameling ~~ C verscheidene gekalibreerde aaneensluitingsreeksen Cx omvat, en waarbij elke gekalibreerde aaneensluitingsreeks Cx verkregen wordt middels het kalibreren van één voorspelde aaneensluitingsreeks Bx;performing spatial calibration on each predicted concatenation sequence Bx in the predicted concatenation sequence set B according to the default marked concatenation sequence A to obtain a calibrated concatenation sequence set C, where the calibrated concatenation sequence set ~~ C includes several calibrated concatenation sequences Cx, and each calibrated concatenation sequence Cx is obtained by calibrating of one predicted concatenation set Bx; het berekenen van een Euclidische afstand tussen een /-de element a; in de standaard gemarkeerde aaneensluitingsreeks A en een j-de element ¢; in de gekalibreerde aaneensluitingsreeks Cx in de gekalibreerdeaaneensluitingsreeksverzameling C;calculating a Euclidean distance between a /th element a; in the default marked concatenation sequence A and a jth element ¢; in the calibrated concatenation sequence Cx in the calibrated concatenation sequence set C; het opbouwen van een gelijkenismatrix P van m = n, waarbij een element P(7, f) in de gelijkenismatrix P afstand (a; ¢;) is, waarbij m het totale aantal elementen in de standaard gemarkeerde aaneensluitingsreeks A is, en n het totale 
aantal elementen in de gekalibreerde aaneensluitingsreeks Cx is;building a similarity matrix P of m = n, where an element P(7, f) in the similarity matrix P is distance (a; ¢;), where m is the total number of elements in the standard marked concatenation sequence A, and n is the total number of elements in the calibrated concatenation sequence is Cx; - 05 - het zoeken, middels het respectievelijk nemen van de elementen aan twee uiteinden van een doeldiagonaal in de gelijkenismatrix P als een startpunt en een eindpunt en het nemen van de doeldiagonaal als een hoofddiagonaal of een hulpdiagonaal, van een pad van het startpunt naar het eindpunt in de reeks, waarbij in het zoekproces het element met de minimumelementwaarde in een voorwaartse bewegingsrichting gezocht wordt in elke stap; het optellen van alle elementen op het pad om gelijkenissen tussen de standaard gemarkeerde aaneensluitingsreeks A en de gekalibreerde aaneensluitingsreeksen Cx te verkrijgen; het doorlopen van alle gekalibreerde aaneensluitingsreeksen Cx in de gekalibreerdeaaneensluitingsreeksverzameling C om een gelijkenisverzameling te verkrijgen; en het evalueren van een deeplearningmodel volgens de gelijkenisverzameling.- 05 - searching, by taking the elements at two ends of a target diagonal in the similarity matrix P as a starting point and an ending point respectively and taking the target diagonal as a main diagonal or a secondary diagonal, respectively, of a path from the starting point to the end point in the sequence, the search process searching for the element with the minimum element value in a forward direction of motion in each step; summing all elements on the path to obtain similarities between the standard marked match sequence A and the calibrated match sequences Cx; iterating through all calibrated matchstrings Cx in the calibrated matchstring set C to obtain a similarity set; and evaluating a deep learning model according to the similarity set. 2. 
Computergeimplementeerde deeplearningmodelprestatie-evaluatiewerkwijze volgens conclusie 1, met het kenmerk dat elk aaneengesloten stuk één of meer pixels omvat; dat in eenzelfde aaneengesloten stuk, de pixels dezelfde waardes hebben; en dat indien het aaneengesloten stuk een veelheid van pixels heeft, in het aaneengesloten stuk elke pixel een aangrenzende pixel in een horizontale richting of een verticale richting heeft.A computer-implemented deep learning model performance evaluation method according to claim 1, characterized in that each contiguous piece comprises one or more pixels; that in the same contiguous piece, the pixels have the same values; and that if the contig has a plurality of pixels, then in the contig each pixel has an adjacent pixel in a horizontal direction or a vertical direction. 3. Computergeimplementeerde deeplearningmodelprestatie-evaluatiewerkwijze volgens conclusie 1, met het kenmerk dat bij het verwerven van een eerste aaneengesloten stuk en een tweede aaneengesloten stuk, waarbij het eerste aaneengesloten stuk uit een waar label wordt geëxtraheerd, het tweede aaneengesloten stuk uit voorspelde resultaten die met het ware label corresponderen wordt geëxtraheerd, en de voorspelde resultaten een voorspeld resultaat van elk epochmodel omvatten, in het bijzonder het volgende omvat: het gebruiken van twee keer doorlopen (““Two-Pass”) om progressieve scanning en zoeken uit te voeren op het ware label en het voorspelde resultaat van elk epochmodel van links naar rechts en van boven naar onder, om zo verscheidene eerste aaneengesloten stukken en verscheidene tweede aaneengesloten stukken te verkrijgen.A computer-implemented deep learning model performance evaluation method according to claim 1, characterized in that upon acquiring a first contig and a second contig, extracting the first contig from a true label, extracting the second contig from predicted results associated with the true tag is extracted, and the predicted results include a 
predicted result of each epoch model, specifically includes: using two-pass (“Two-Pass”) to perform progressive scanning and searching on the true label and the predicted result of each epoch model from left to right and top to bottom to obtain several first contigs and several second contigs. 4. Computergeimplementeerde deeplearningmodelprestatie-evaluatiewerkwijze volgens conclusie 3, met het kenmerk dat het respectievelijk extraheren van contourinformatie van het eerste aaneengesloten stuk en het tweede aaneengesloten stuk en een totaal aantal pixelpunten in de stukken om respectievelijk een standaard gemarkeerde aaneensluitingsreeks A en een voorspeldeaaneensluitingsreeksverzameling B te verkrijgen, waarbij de voorspeldeaaneensluitingsreeksverzameling B verscheidene voorspelde aaneensluitingsreeksen Bx omvat, waarbij Bx een voorspelde aaneensluitingsreeks representeert die correspondeert met een k-de epochmodel, en waarbij elke voorspelde aaneensluitingsreeks Bx de contourinformatie van het tweede aaneengesloten stuk omvat die correspondeert met het voorspelde resultaat van één epochmodel en het totale aantal pixelpunten in het stuk, in het bijzonder het volgende omvat: het extraheren van contourinformatie van het eerste aaneengesloten stuk en een totaal aantal pixelpunten in het stuk om een standaard gemarkeerde aaneensluitingsreeks A te verkrijgen; het extraheren van contourinformatie van het tweede aaneengesloten stuk die correspondeert met het voorspelde resultaat van één epochmodel en een totaal aantal pixelpunten in het stuk om één voorspelde aaneensluitingsreeks Bx te verkrijgen, en het doorlopen van de tweede aaneengesloten stukken die corresponderen met de voorspelde resultaten van alle epochmodellen om een voorspeldeaaneensluitingsreeksverzameling B te verkrijgen.The computer-implemented deep learning model performance evaluation method according to claim 3, characterized by extracting contour information of the first contiguous piece and the second 
contiguous piece and a total number of pixel points in the pieces, respectively, to obtain a default marked match set A and a predicted match set B, respectively , where the predicted concatenation sequence set B comprises several predicted concatenation sequences Bx, where Bx represents a predicted concatenation sequence corresponding to a k-th epoch model, and each predicted concatenation sequence Bx includes the contour information of the second contig that corresponds to the predicted result of one epoch model, and the total number of pixel points in the piece, comprising, in particular: extracting contour information from the first contiguous piece and a total number of pixel points in the piece to obtain a default marked concatenation sequence A; extracting contour information from the second contig that corresponds to the predicted result of one epoch model and a total number of pixel points in the chunk to obtain one predicted concatenation sequence Bx, and traversing the second contigs that correspond to the predicted results of all epoch models to obtain a predicted concatenation sequence set B. 5. 
5. Computer-implemented deep learning model performance evaluation method according to claim 1, characterized in that performing spatial calibration on each predicted connectivity sequence Bk in the predicted connectivity sequence set B according to the standard marked connectivity sequence A to obtain a calibrated connectivity sequence set C specifically comprises: determining, according to a spatial intersection position relationship, a spatial correspondence between the connected pieces in the standard marked connectivity sequence A and the connected pieces in each predicted connectivity sequence Bk in the predicted connectivity sequence set B; rearranging, according to the index order of the standard marked connectivity sequence A,
within each predicted connectivity sequence Bk, the connected pieces that have the spatial intersection position relationship with the standard marked connectivity sequence A, removing as redundant the connected pieces that have no spatial intersection position relationship, and obtaining a calibrated connectivity sequence Ck; and traversing all predicted connectivity sequences Bk to obtain a calibrated connectivity sequence set C.

6. Computer-implemented deep learning model performance evaluation method according to claim 1, characterized in that evaluating a deep learning model according to the similarity set specifically comprises: calculating a connectivity similarity indicator according to the similarity set, wherein a larger value of the connectivity similarity indicator indicates better connectivity retention of a predicted result of the deep learning model.

7. Computer-implemented deep learning model performance evaluation method according to claim 6, characterized in that calculating a connectivity similarity indicator according to the similarity set specifically comprises: using an integral mapping formula to calculate a connectivity similarity indicator according to the similarity set, wherein the
integral mapping formula is dk_csim = (dk - dmin) / (dmax - dmin), wherein dk represents a similarity, dmin represents the minimum value of all similarities in the similarity set, dmax represents the maximum value of all similarities in the similarity set, and dk_csim represents a connectivity similarity indicator value.

8. Deep learning model performance evaluation computer system, characterized in that the system comprises: a connected piece identification unit configured to acquire a first connected piece and a second connected piece, wherein the first connected piece is extracted from a true label, the second connected piece is extracted from predicted results corresponding to the true label, and the predicted results comprise a predicted result of each epoch model; a connectivity sequence division unit
configured to respectively extract contour information of the first connected piece and the second connected piece and a total number of pixel points in the pieces to obtain a standard marked connectivity sequence A and a predicted connectivity sequence set B, wherein the predicted connectivity sequence set B comprises several predicted connectivity sequences Bk, Bk represents a predicted connectivity sequence corresponding to a k-th epoch model, and each predicted connectivity sequence Bk comprises the contour information of the second connected piece corresponding to the predicted result of one epoch model and the total number of pixel points in the piece; a spatial calibration unit configured to perform, according to the standard marked connectivity sequence A, spatial calibration on each predicted connectivity sequence Bk in the predicted connectivity sequence set B to obtain a calibrated connectivity sequence set C, wherein the calibrated connectivity sequence set C comprises several calibrated connectivity sequences Ck, and each calibrated
connectivity sequence Ck is obtained by calibrating one predicted connectivity sequence Bk; a distance calculation unit configured to calculate a Euclidean distance between an i-th element ai in the standard marked connectivity sequence A and a j-th element cj in a calibrated connectivity sequence Ck in the calibrated connectivity sequence set C; a similarity matrix construction unit configured to construct an m x n similarity matrix P, wherein an element P(i, j) in the similarity matrix P is distance(ai, cj), m is the total number of elements in the standard marked connectivity sequence A, and n is the total number of elements in the calibrated connectivity sequence Ck; a path search unit configured to search, by respectively taking the elements at the two ends of a
target diagonal in the similarity matrix P as a start point and an end point and taking the target diagonal as a main diagonal or an anti-diagonal, for a path from the start point to the end point in the matrix, wherein in each step of the search process the element with the minimum element value in the forward movement direction is selected; a similarity calculation unit configured to sum all elements on the path to obtain a similarity between the standard marked connectivity sequence A and the calibrated connectivity sequence Ck; a similarity set acquisition unit configured to traverse all calibrated connectivity sequences Ck in the calibrated connectivity sequence set C to obtain a similarity set; and an evaluation unit configured to evaluate a deep learning model according to the similarity set.
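The distance calculation, similarity matrix, and path search units described above can be sketched together. This is a hedged sketch, assuming each sequence element is a small feature vector (e.g. centroid coordinates or pixel count) and interpreting the path search as a greedy forward walk from P(0, 0) to P(m-1, n-1) along the main diagonal, akin to a greedy variant of dynamic time warping; the function name `connectivity_similarity` is illustrative, not from the patent.

```python
import math


def connectivity_similarity(A, C):
    """Accumulate distances along a greedy minimum path through the m x n matrix P.

    A, C: sequences of feature vectors, one per connected piece.
    Returns the summed distances along the path; a smaller total means the
    standard marked sequence and the calibrated sequence are more alike.
    """
    m, n = len(A), len(C)
    # P[i][j] = Euclidean distance between the i-th element of A and j-th of C
    P = [[math.dist(a, c) for c in C] for a in A]
    i = j = 0            # start point: one end of the main diagonal
    total = P[0][0]
    while (i, j) != (m - 1, n - 1):  # walk toward the other end
        moves = []       # forward movement directions only
        if i + 1 < m and j + 1 < n:
            moves.append((P[i + 1][j + 1], i + 1, j + 1))  # diagonal step
        if i + 1 < m:
            moves.append((P[i + 1][j], i + 1, j))          # step down
        if j + 1 < n:
            moves.append((P[i][j + 1], i, j + 1))          # step right
        cost, i, j = min(moves)  # minimum element value in forward direction
        total += cost
    return total
```

Traversing all calibrated sequences Ck with this function yields the similarity set used by the evaluation unit.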
9. Deep learning model performance evaluation computer system according to claim 8, characterized in that evaluating a deep learning model according to the similarity set specifically comprises: calculating a connectivity similarity indicator according to the similarity set, wherein a larger value of the connectivity similarity indicator indicates better connectivity retention of a predicted result of the deep learning model.

10. Deep learning model performance evaluation computer system according to claim 9, characterized in that calculating a connectivity similarity indicator according to the similarity set specifically comprises: using an integral mapping formula to calculate a connectivity similarity indicator according to the similarity set, wherein the integral mapping formula is dk_csim = (dk - dmin) / (dmax - dmin), wherein dk represents
a similarity, dmin represents the minimum value of all similarities in the similarity set, dmax represents the maximum value of all similarities in the similarity set, and dk_csim represents a connectivity similarity indicator value.
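The integral mapping formula of claims 7 and 10 amounts to a min-max normalization of the accumulated similarities, scaling each epoch's value into [0, 1] relative to the whole similarity set. A minimal sketch (the function name `connectivity_similarity_indicator` is illustrative, not from the patent):

```python
def connectivity_similarity_indicator(d_k, similarities):
    """dk_csim = (dk - dmin) / (dmax - dmin), per claims 7 and 10.

    d_k: the similarity of one epoch model.
    similarities: the full similarity set across all epoch models.
    """
    d_min, d_max = min(similarities), max(similarities)
    return (d_k - d_min) / (d_max - d_min)
```

For example, with a similarity set of {2.0, 4.0, 10.0}, the epoch with d_k = 4.0 maps to (4 - 2) / (10 - 2) = 0.25.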
NL2032560A 2022-01-13 2022-07-21 Deep learning model performance evaluation method and system NL2032560B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210034162.1A CN114049569B (en) 2022-01-13 2022-01-13 Deep learning model performance evaluation method and system

Publications (2)

Publication Number Publication Date
NL2032560A true NL2032560A (en) 2023-07-19
NL2032560B1 NL2032560B1 (en) 2024-01-08

Family

ID=80196382

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2032560A NL2032560B1 (en) 2022-01-13 2022-07-21 Deep learning model performance evaluation method and system

Country Status (2)

Country Link
CN (1) CN114049569B (en)
NL (1) NL2032560B1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241326B (en) * 2022-02-24 2022-05-27 自然资源部第三地理信息制图院 Progressive intelligent production method and system for ground feature elements of remote sensing images

Citations (1)

Publication number Priority date Publication date Assignee Title
US20210342585A1 (en) * 2020-05-01 2021-11-04 Caci, Inc. - Federal Systems and methods for extracting and vectorizing features of satellite imagery

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN106846344B (en) * 2016-12-14 2018-12-25 国家***第二海洋研究所 A kind of image segmentation optimal identification method based on the complete degree in edge
CN109446992B (en) * 2018-10-30 2022-06-17 苏州中科天启遥感科技有限公司 Remote sensing image building extraction method and system based on deep learning, storage medium and electronic equipment
CN110232696B (en) * 2019-06-20 2024-03-08 腾讯科技(深圳)有限公司 Image region segmentation method, model training method and device
KR20220133918A (en) * 2020-01-30 2022-10-05 비타디엑스 인터내셔널 Systematic characterization of a subject within a biological sample
CN111931782B (en) * 2020-08-12 2024-03-01 中国科学院上海微***与信息技术研究所 Semantic segmentation method, system, medium and device
CN113033403A (en) * 2021-03-25 2021-06-25 生态环境部卫星环境应用中心 Image tile-based ecological protection red line ground object target identification method and system


Also Published As

Publication number Publication date
CN114049569A (en) 2022-02-15
CN114049569B (en) 2022-03-18
NL2032560B1 (en) 2024-01-08
