CN116580309B - Surface mine stope extraction method combining deep learning and object-oriented analysis - Google Patents


Info

Publication number
CN116580309B
Authority
CN
China
Prior art keywords
segmentation
remote sensing
surface mine
researched
oriented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310855540.7A
Other languages
Chinese (zh)
Other versions
CN116580309A (en
Inventor
王润
李士垚
徐航
李彧磊
叶疆
何睿
刘帅
杨涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei Geological Environment Station
Original Assignee
Hubei Geological Environment Station
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Geological Environment Station
Priority to CN202310855540.7A
Publication of CN116580309A
Application granted
Publication of CN116580309B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V20/10 Terrestrial scenes
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/08 Learning methods
    • G06V10/26 Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region
    • G06V10/40 Extraction of image or video features
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition or understanding using neural networks
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change


Abstract

The application discloses a surface mine stope extraction method combining deep learning and object-oriented analysis, comprising the following steps: performing preliminary identification on the remote sensing image of the region to be studied with a deep learning model to obtain the spatial information of all surface mine stopes in the image, including their spatial positions and surface coverage areas; determining the optimal segmentation parameters of an object-oriented segmentation model, where the segmentation parameters comprise the segmentation scale, shape factor, and compactness factor, and taking the corresponding segmentation result as the final object-oriented segmentation result of the image; and combining the spatial information extraction result of the surface mine stopes with the object-oriented segmentation result to obtain all surface mine stopes and their vector boundaries in the remote sensing image of the region to be studied. The application achieves fine extraction of surface mine stope boundaries; the extracted stopes contain more complete boundary information and match the actual boundaries more closely.

Description

Surface mine stope extraction method combining deep learning and object-oriented analysis
Technical Field
The application relates to the technical field of mineral resource development and supervision, and in particular to a surface mine stope extraction method combining deep learning and object-oriented analysis.
Background
With the development of Earth observation technology, remote sensing has become an important means for mineral resource development monitoring, mine geological environment investigation and monitoring, ecological environment monitoring, and related work. Although researchers in many regions have successively carried out related studies, information acquisition for typical surface features of mining areas still relies mainly on expert visual interpretation and conventional human-computer interaction, a technical approach that is insufficiently automated and intelligent.
Deep learning methods can learn the most representative and separable features in a data set hierarchically, end to end, and are widely applied in pattern recognition and related research fields. Compared with classical machine learning algorithms, which require expert experience to construct and select target features, deep learning can learn sample features autonomously without manually constructed features or hand-designed rules, effectively improving the degree of automation and intelligence.
Although deep learning has shown good performance, applying it to wide-range surface mine stope boundary extraction still presents problems. On the one hand, deep learning requires a large number of samples, and surface mine stopes occupy only a small part of the ground surface, so only a small number of samples can be obtained for large-scale extraction, which cannot adequately support stope boundary extraction. On the other hand, surface mine scenes are complex and exhibit complex spectral features; a deep learning model can identify only parts of such a scene, and fragmented regions and holes readily appear in its output.
Disclosure of Invention
In view of the current state of the art, the application provides a surface mine stope extraction method combining deep learning and object-oriented analysis, which achieves fine extraction of surface mine stope boundaries; the extracted stopes contain more complete boundary information and match the actual boundaries more closely.
In order to achieve the above purpose, the application adopts the following technical scheme:
the surface mine stope extraction method combining deep learning and object-oriented analysis comprises the following steps:
S1, spatial information extraction of surface mine stopes:
S1.1, manually interpreting n surface mine stopes from the remote sensing image of the region to be studied, taking the obtained manually interpreted objects as a training sample set, and training a deep learning model on this training sample set;
s1.2, performing preliminary identification on the remote sensing image of the area to be researched by adopting a trained deep learning model to obtain spatial information results of all surface mine stopes in the remote sensing image of the area to be researched, wherein the spatial information results comprise spatial positions and ground surface coverage areas;
s2, object-oriented segmentation of the remote sensing image:
s2.1, setting object-oriented segmentation models with different segmentation parameters, and respectively segmenting remote sensing images of a region to be studied to obtain a plurality of groups of object-oriented segmentation results, wherein the segmentation parameters comprise segmentation scales, shape factors and compactness factors;
S2.2, performing superposition analysis between each group of object-oriented segmentation results and m of the manually interpreted objects, screening out the segmented objects that intersect the m manually interpreted objects, and calculating the coincidence degree Di of each screened segmented object with the manually interpreted object it intersects, as well as the average coincidence degree D; in the notation used here, Di = Sr(Ri)/Ss(Oi) and D = (1/m)·Σ(i=1..m) Di, where Sr(·) is the area function of the manually interpreted object Ri, Ss(·) is the area function of the segmented object Oi that intersects it, and m ≤ n;
S2.3, selecting the segmentation parameters for which the average coincidence degree D is maximal as the optimal segmentation parameters of the object-oriented segmentation model, and taking the corresponding segmentation result as the final object-oriented segmentation result of the remote sensing image of the region to be studied;
s3, extracting vector boundaries of the surface mine stope:
s3.1, carrying out superposition analysis on a final object-oriented segmentation result of the remote sensing image of the area to be researched and a spatial information extraction result of the surface mine stope, and reserving segmentation objects intersecting with the spatial information extraction result of the surface mine stope in the final object-oriented segmentation result of the remote sensing image of the area to be researched;
S3.2, for each retained segmented object, calculating the ratio of the area of its intersection with the corresponding surface mine stope spatial information extraction result to the area of the segmented object, and judging as follows:
if the ratio is smaller than a first set value, the segmented object is removed;
if the ratio is between the first set value and a second set value, manually judging whether the segmented object is a surface mine stope; if so, it is retained, and if not, it is removed;
if the ratio is larger than the second set value, the segmented object is retained;
and S3.3, merging adjacent objects among the retained segmented objects to obtain all surface mine stopes and their vector boundaries in the remote sensing image of the region to be studied.
Furthermore, n surface mine stopes forming the training sample set are uniformly distributed in the remote sensing image of the area to be researched.
Furthermore, the deep learning model adopts a lightweight network model U-Net.
Further, in the lightweight network model U-Net, sample slices of a fixed pixel size are generated over the patch area of the training samples with a step size of 128 pixels.
Furthermore, in the lightweight network model U-Net, the training sample slices are rotated by 90, 180, and 270 degrees to augment the training sample set.
Further, in step S2.2, when calculating the coincidence degree Di between each screened segmented object and the manually interpreted object it intersects, if more than one segmentation result and/or manual interpretation result is involved in a given intersection relationship, their areas are first merged and the calculation is then performed.
Further, in step S3.2, each retained segmented object is also judged in combination with land use classification data: if it intersects buildings, cultivated land, or water bodies, it is removed; otherwise it is retained.
Further, in step S3.2, the first set value is 10% and the second set value is 20%.
Further, surface mine stopes at k locations are manually interpreted from the remote sensing image of the region to be studied, and the obtained manual interpretation results are used as a verification sample set; after step S3.3 is completed, all surface mine stopes and their vector boundaries in the remote sensing image of the region to be studied are verified against this verification sample set, which does not overlap the training sample set.
The beneficial effects of the application are as follows:
according to the method, a deep learning model is adopted to initially identify remote sensing images of a region to be researched, spatial information results of all surface mine stopes in the remote sensing images of the region to be researched are obtained, spatial positions and surface coverage areas of potential surface mine stopes are located, then the remote sensing images of the region to be researched are subjected to object-oriented segmentation for many times, segmented objects and manually interpreted objects are subjected to superposition analysis, the coincidence degree is evaluated based on area similarity, optimal segmentation parameters of the object-oriented segmentation model are obtained, the segmentation results are used as final object-oriented segmentation results of the remote sensing images of the region to be researched, finally, the spatial information extraction results of the surface mine stopes and the vector boundaries of the remote sensing images are combined, and fine extraction of the boundaries of the surface mine stopes is achieved after screening, and the extracted surface mine stopes contain more complete boundary information and are higher in similarity with actual boundaries. Through verification, the accuracy of the method for identifying the spatial position of the stope of the surface mine is 0.862, and the extraction accuracy of the average spatial range is 0.78.
Drawings
FIG. 1 is a flow diagram of a surface mine stope extraction method combining deep learning and object oriented analysis in accordance with the present application;
FIG. 2 is a schematic diagram (partial) of the object-oriented segmentation result of the remote sensing image of the region to be studied;
FIG. 3 is a schematic diagram (partial) of the spatial information extraction result of surface mine stopes from the remote sensing image of the region to be studied;
FIG. 4 is a schematic diagram (partial) of the superposition analysis of the object-oriented segmentation result of the remote sensing image and the spatial information extraction result of surface mine stopes;
FIG. 5 is a schematic diagram (partial) of the extraction result of surface mine stopes and their vector boundaries in the remote sensing image of the region to be studied.
Detailed Description
The application is further described below with reference to the accompanying drawings.
Referring to fig. 1, the surface mine stope extraction method combining deep learning and object-oriented analysis includes the following steps: S1, spatial information extraction of surface mine stopes; S2, object-oriented segmentation of the remote sensing image; S3, vector boundary extraction of surface mine stopes.
Referring to fig. 3, the spatial information extraction of surface mine stopes from the remote sensing image of the region to be studied includes the following steps:
S1.1, manually interpreting n surface mine stopes from the remote sensing image of the region to be studied, the obtained manually interpreted objects serving as a training sample set, with the n stopes uniformly distributed in the image, and training a deep learning model on this training sample set;
S1.2, performing preliminary identification on the remote sensing image of the region to be studied with the trained deep learning model to obtain the spatial information of all surface mine stopes in the image, including spatial positions and surface coverage areas.
In this embodiment, the deep learning model uses a lightweight network model U-Net.
Specifically, in the lightweight network model U-Net, sample slices of a fixed pixel size are generated over the patch area of the training samples with a step size of 128 pixels. Furthermore, the training sample slices are rotated by 90, 180, and 270 degrees to augment the training sample set.
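The slicing and augmentation steps above can be sketched as follows. The 256 x 256 slice size is an assumption for illustration only, since the text states only the 128-pixel stride; images are represented as nested lists to keep the sketch self-contained.

```python
# Sketch of sample-slice generation and rotation augmentation.
# NOTE: size=256 is an assumed slice size, not stated in the patent.

def generate_slices(height, width, size=256, stride=128):
    """Return the top-left (row, col) origins of slices over an image patch."""
    origins = []
    for r in range(0, height - size + 1, stride):
        for c in range(0, width - size + 1, stride):
            origins.append((r, c))
    return origins

def rotate90(grid):
    """Rotate a 2-D list 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def augment(grid):
    """Original slice plus its 90/180/270-degree rotations (4x the samples)."""
    out = [grid]
    for _ in range(3):
        out.append(rotate90(out[-1]))
    return out
```

For a 512 x 512 patch this yields a 3 x 3 grid of overlapping slice origins, and each slice expands into four training samples.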
Referring to fig. 2, the remote sensing image object-oriented segmentation of the remote sensing image of the region to be studied includes the following steps:
s2.1, setting object-oriented segmentation models with different segmentation parameters, and respectively segmenting remote sensing images of a region to be studied to obtain a plurality of groups of object-oriented segmentation results, wherein the segmentation parameters comprise segmentation scales, shape factors and compactness factors;
S2.2, performing superposition analysis between each group of object-oriented segmentation results and m of the manually interpreted objects, screening out the segmented objects that intersect the m manually interpreted objects, and calculating the coincidence degree Di of each screened segmented object with the manually interpreted object it intersects, as well as the average coincidence degree D; in the notation used here, Di = Sr(Ri)/Ss(Oi) and D = (1/m)·Σ(i=1..m) Di, where Sr(·) is the area function of the manually interpreted object Ri, Ss(·) is the area function of the segmented object Oi that intersects it, and m ≤ n;
S2.3, selecting the segmentation parameters for which the average coincidence degree D is maximal as the optimal segmentation parameters of the object-oriented segmentation model, and taking the corresponding segmentation result as the final object-oriented segmentation result of the remote sensing image of the region to be studied.
In step S2.2, when calculating the coincidence degree Di between each screened segmented object and the manually interpreted object it intersects, if more than one segmentation result and/or manual interpretation result is involved in a given intersection relationship, their areas are first merged and the calculation is then performed; the formula for the average coincidence degree D is adapted accordingly.
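A minimal sketch of the coincidence-degree evaluation of step S2.2, assuming the reconstructed ratio Di = Sr(Ri)/Ss(Oi) (the original formulas are not reproduced in this text, so this exact form is an assumption); areas are taken here as precomputed numbers, with intersecting segments already merged:

```python
# ASSUMPTION: Di is the manual-object area over the merged area of the
# segmented object(s) intersecting it; the patent's image-based formula
# may differ in detail.

def coincidence(manual_area, merged_segment_area):
    """Reconstructed overlap degree Di for one manually interpreted object."""
    return manual_area / merged_segment_area

def average_coincidence(pairs):
    """Average D over m (manual_area, merged_segment_area) pairs."""
    return sum(coincidence(a, b) for a, b in pairs) / len(pairs)
```

Each candidate parameter combination of step S2.1 would be scored by this average, and the combination with the highest score selected in step S2.3.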
The technical principle of performing remote sensing image object-oriented segmentation by using the optimal segmentation parameters is as follows:
the remote sensing image object-oriented segmentation is a process of dividing an image scene into a plurality of meaningful sub-areas based on homogeneity or heterogeneity criteria according to the difference of ground object targets on image characteristics. For any area to be researched, the types of mine development occupation area are more, the characteristic differences of pattern spectrum, texture, geometry and the like are obvious, and a single object capable of completely expressing the mine development occupation area outline information is not easy to directly separate. According to the application, segmentation results under different parameter combinations are generated by controlling segmentation scale, shape factor and compactness factor variables, and after superposition analysis is carried out on the segmentation results and the artificial interpretation objects, intersecting segmentation objects are screened out, and then the coincidence degree is evaluated based on area similarity. And finally selecting the segmentation scale, the shape factor and the compactness factor under the condition of highest average area similarity (optimal coincidence) as optimal segmentation parameters through multiple judgment. To avoid too much "fragmentation" of the segmentation result, the fewer segmented objects contained in the range, the better, i.e. the more complete the object, at the optimal overlap ratio.
Referring to fig. 4 to 5, the extraction of surface mine stopes and their vector boundaries from the remote sensing image of the region to be studied includes the following steps:
s3.1, carrying out superposition analysis on a final object-oriented segmentation result of the remote sensing image of the area to be researched and a spatial information extraction result of the surface mine stope, and reserving segmentation objects intersecting with the spatial information extraction result of the surface mine stope in the final object-oriented segmentation result of the remote sensing image of the area to be researched;
S3.2, for each retained segmented object, calculating the ratio of the area of its intersection with the corresponding surface mine stope spatial information extraction result to the area of the segmented object, and judging as follows:
if the ratio is smaller than a first set value, the segmented object is removed;
if the ratio is between the first set value and a second set value, manually judging whether the segmented object is a surface mine stope; if so, it is retained, and if not, it is removed;
if the ratio is larger than the second set value, the segmented object is retained;
each retained segmented object is also judged in combination with land use classification data: if it intersects buildings, cultivated land, or water bodies, it is removed; otherwise it is retained;
and S3.3, merging adjacent objects among the retained segmented objects to obtain all surface mine stopes and their vector boundaries in the remote sensing image of the region to be studied.
In this embodiment, the first set value is 10% and the second set value is 20%.
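A minimal sketch of the screening rules of step S3.2 with this embodiment's thresholds (10% and 20%); `manual_check` stands in for the human judgement on borderline objects and is a placeholder, not a name from the patent.

```python
def keep_object(intersection_area, object_area, manual_check,
                first=0.10, second=0.20):
    """Decide whether a retained segmented object survives screening.

    intersection_area: area of overlap with the stope spatial-information result
    object_area: area of the segmented object itself
    manual_check: callable returning True if a human judges it a stope
    """
    ratio = intersection_area / object_area
    if ratio < first:          # below the first set value: remove
        return False
    if ratio <= second:        # between the set values: defer to a human
        return manual_check()
    return True                # above the second set value: retain
```

Objects that survive this screening (and the land-use check) are then merged with their neighbours in step S3.3.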
Surface mine stopes at k locations are manually interpreted from the remote sensing image of the region to be studied, and the obtained manual interpretation results are used as a verification sample set; after step S3.3 is completed, all surface mine stopes and their vector boundaries in the remote sensing image of the region to be studied are verified against this verification sample set, which does not overlap the training sample set.
And (3) verifying the accuracy of the identification result:
for a test study area, the surface mine stope in the study area only accounts for about 1.4% of the total area of the study area, the method provided by the application is used for identifying the position of the surface mine stope 152, the position of the surface mine stope is correct 125, the position of the surface mine stope is wrong 27, the position of the surface mine stope is missing 13, the identification accuracy F1 is 0.862, and the extraction accuracy of the average space range is 0.78.
In general, the method first performs preliminary identification on the remote sensing image of the region to be studied with a deep learning model, obtaining the spatial information of all surface mine stopes in the image and locating the spatial positions and surface coverage areas of potential stopes. The image is then segmented multiple times with an object-oriented model; the segmented objects are overlaid with the manually interpreted objects and the coincidence degree is evaluated based on area similarity to obtain the optimal segmentation parameters, with the corresponding segmentation taken as the final object-oriented segmentation result of the image. Finally, the spatial information extraction result of the surface mine stopes and the object-oriented segmentation result are combined and screened to obtain all surface mine stopes and their vector boundaries in the image, achieving fine extraction of stope boundaries; the extracted stopes contain more complete boundary information and match the actual boundaries more closely.
Of course, the above embodiments are only preferred embodiments of the present application, and the scope of the present application is not limited thereto, so that all equivalent modifications made in the principles of the present application are included in the scope of the present application.

Claims (9)

1. A surface mine stope extraction method combining deep learning and object-oriented analysis, characterized in that it comprises the following steps:
S1, spatial information extraction of surface mine stopes:
S1.1, manually interpreting n surface mine stopes from the remote sensing image of the region to be studied, taking the obtained manually interpreted objects as a training sample set, and training a deep learning model on this training sample set;
s1.2, performing preliminary identification on the remote sensing image of the area to be researched by adopting a trained deep learning model to obtain spatial information results of all surface mine stopes in the remote sensing image of the area to be researched, wherein the spatial information results comprise spatial positions and ground surface coverage areas;
s2, object-oriented segmentation of the remote sensing image:
s2.1, setting object-oriented segmentation models with different segmentation parameters, and respectively segmenting remote sensing images of a region to be studied to obtain a plurality of groups of object-oriented segmentation results, wherein the segmentation parameters comprise segmentation scales, shape factors and compactness factors;
S2.2, performing superposition analysis between each group of object-oriented segmentation results and m of the manually interpreted objects, screening out the segmented objects that intersect the m manually interpreted objects, and calculating the coincidence degree Di of each screened segmented object with the manually interpreted object it intersects, as well as the average coincidence degree D; in the notation used here, Di = Sr(Ri)/Ss(Oi) and D = (1/m)·Σ(i=1..m) Di, where Sr(·) is the area function of the manually interpreted object Ri, Ss(·) is the area function of the segmented object Oi that intersects it, and m ≤ n;
S2.3, selecting the segmentation parameters for which the average coincidence degree D is maximal as the optimal segmentation parameters of the object-oriented segmentation model, and taking the corresponding segmentation result as the final object-oriented segmentation result of the remote sensing image of the region to be studied;
s3, extracting vector boundaries of the surface mine stope:
s3.1, carrying out superposition analysis on a final object-oriented segmentation result of the remote sensing image of the area to be researched and a spatial information extraction result of the surface mine stope, and reserving segmentation objects intersecting with the spatial information extraction result of the surface mine stope in the final object-oriented segmentation result of the remote sensing image of the area to be researched;
S3.2, for each retained segmented object, calculating the ratio of the area of its intersection with the corresponding surface mine stope spatial information extraction result to the area of the segmented object, and judging as follows:
if the ratio is smaller than a first set value, the segmented object is removed;
if the ratio is between the first set value and a second set value, manually judging whether the segmented object is a surface mine stope; if so, it is retained, and if not, it is removed;
if the ratio is larger than the second set value, the segmented object is retained;
and S3.3, merging adjacent objects among the retained segmented objects to obtain all surface mine stopes and their vector boundaries in the remote sensing image of the region to be studied.
2. The surface mine stope extraction method combining deep learning and object oriented analysis of claim 1, wherein: the n open pit mine stopes forming the training sample set are uniformly distributed in the remote sensing image of the area to be researched.
3. The surface mine stope extraction method combining deep learning and object oriented analysis of claim 1, wherein: the deep learning model adopts a lightweight network model U-Net.
4. A surface mine stope extraction method combining deep learning and object-oriented analysis as claimed in claim 3, wherein: in the lightweight network model U-Net, sample slices of a fixed pixel size are generated over the patch area of each training sample with a step size of 128 pixels.
5. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 4, wherein: in the lightweight network model U-Net, each training sample slice is rotated by 90, 180 and 270 degrees respectively by an angle-rotation method, so as to augment the training sample slices.
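The slicing of claim 4 and the rotation augmentation of claim 5 can be sketched with NumPy. The tile size of 256 is an assumption for illustration, since the exact slice size appears only as an unrendered formula in the source text; the 128-pixel stride and the 90/180/270-degree rotations are as claimed.

```python
import numpy as np

def make_slices(image, tile=256, stride=128):
    # Slide a tile-by-tile window over the training-sample patch with a
    # 128-pixel stride (claim 4); tile=256 is an assumed slice size.
    h, w = image.shape[:2]
    return [image[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, stride)
            for x in range(0, w - tile + 1, stride)]

def augment_by_rotation(sample):
    # Claim 5: amplify each training slice with 90/180/270-degree rotations.
    return [np.rot90(sample, k) for k in (1, 2, 3)]
```

A 512 x 512 patch yields a 3 x 3 grid of overlapping 256-pixel slices at this stride, and each slice then contributes three extra rotated copies.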
6. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 1, wherein: in step S2.2, when calculating the coincidence degree of a screened segmentation object and the manually interpreted object intersecting it, if more than one segmentation result and/or manual interpretation result takes part in a given intersection relationship, their areas are merged first and the coincidence degree is then calculated.
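A minimal sketch of the area-merging rule in claim 6, assuming the individual areas are already available as numbers: the areas on each side of a many-to-many intersection relationship are summed before the single ratio is taken.

```python
def coincidence_merged(manual_areas, seg_areas):
    # Claim 6: when several segmentation results and/or manual
    # interpretation results share one intersection relationship,
    # sum the areas on each side, then compute one coincidence ratio.
    return sum(manual_areas) / sum(seg_areas)
```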
7. The surface mine stope extraction method combining deep learning and object-oriented analysis of claim 1, wherein: in step S3.2, each retained segmentation object is further judged in combination with land-use classification data: if it intersects buildings, cultivated land or water bodies, it is removed; otherwise it is retained.
8. The surface mine stope extraction method combining deep learning and object oriented analysis of claim 1, wherein: in step S3.2, the first set value is 10% and the second set value is 20%.
9. The surface mine stope extraction method combining deep learning and object-oriented analysis according to any one of claims 1 to 8, wherein: k surface mine stopes are manually interpreted from the remote sensing image of the area to be researched, and the resulting manual interpretation results serve as a verification sample set that does not overlap with the training sample set; after step S3.3 is completed, all surface mine stopes and their vector boundaries in the remote sensing image of the area to be researched are verified against this verification sample set.
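The verification of claim 9 could be scored, for example, as a detection rate over the held-out stopes. The IoU matching metric and the 0.5 cut-off are assumptions for illustration; the claim only requires verification against samples disjoint from the training set.

```python
def detection_rate(iou_values, threshold=0.5):
    # Fraction of verification-set stopes matched by an extracted
    # boundary with intersection-over-union above the threshold.
    # Both the IoU metric and threshold=0.5 are assumed here.
    hits = sum(1 for v in iou_values if v > threshold)
    return hits / len(iou_values)
```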
CN202310855540.7A 2023-07-13 2023-07-13 Surface mine stope extraction method combining deep learning and object-oriented analysis Active CN116580309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310855540.7A CN116580309B (en) 2023-07-13 2023-07-13 Surface mine stope extraction method combining deep learning and object-oriented analysis


Publications (2)

Publication Number Publication Date
CN116580309A CN116580309A (en) 2023-08-11
CN116580309B true CN116580309B (en) 2023-09-15


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3614308A1 (en) * 2018-08-24 2020-02-26 Ordnance Survey Limited Joint deep learning for land cover and land use classification
WO2020040734A1 (en) * 2018-08-21 2020-02-27 Siemens Aktiengesellschaft Orientation detection in overhead line insulators
CN111723712A (en) * 2020-06-10 2020-09-29 内蒙古农业大学 Method and system for extracting mulching film information based on radar remote sensing data and object-oriented mulching film information
WO2021226977A1 (en) * 2020-05-15 2021-11-18 安徽中科智能感知产业技术研究院有限责任公司 Method and platform for dynamically monitoring typical ground features in mining on the basis of multi-source remote sensing data fusion and deep neural network
CN115327613A (en) * 2022-06-20 2022-11-11 华北科技学院 Mine micro-seismic waveform automatic classification and identification method in multilayer multistage mode

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7451065B2 (en) * 2002-03-11 2008-11-11 International Business Machines Corporation Method for constructing segmentation-based predictive models


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MFPA-Net: An efficient deep learning network for automatic ground fissures extraction in UAV images of the coal mining area;Xiao Jian等;International Journal of Applied Earth Observation and Geoinformation;第114卷;1-14 *
Change detection method for mountainous-area remote sensing images based on multi-scale segmentation and decision tree algorithms: a case study of the Panxi area, Sichuan; Zhang Zhengjian, Li Ainong, Lei Guangbin, Bian Jinhu, Wu Bingfang; Acta Ecologica Sinica (No. 24); 7222-7232 *
Research progress and prospects of remote sensing extraction of elements in open-pit mining areas; Zhang Xian et al.; Remote Sensing for Natural Resources; Vol. 35 (No. 02); 25-33 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant