CN108470185A - Ground feature annotation device and method for satellite images - Google Patents

Ground feature annotation device and method for satellite images

Info

Publication number
CN108470185A
CN108470185A (application CN201810147308.7A)
Authority
CN
China
Prior art keywords
satellite image
ground feature
layer
satellite
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810147308.7A
Other languages
Chinese (zh)
Inventor
史红欣
张弓
顾竹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Standard World Co Ltd
Original Assignee
Beijing Standard World Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Standard World Co Ltd
Priority to CN201810147308.7A priority Critical patent/CN108470185A/en
Publication of CN108470185A publication Critical patent/CN108470185A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a ground feature annotation device and method for satellite images. The ground feature annotation device comprises: a first unit configured to obtain a training set, extract the ground feature annotation features of the training set, and generate a training model, where the training set contains a plurality of first satellite images whose ground features have already been annotated; and a second unit configured to obtain the training model and annotate the ground features in a second satellite image according to it. The ground feature annotation method comprises: extracting the ground feature annotation features of a training set to generate a training model; and annotating the ground features in a second satellite image according to the training model. The device and method realize automatic annotation of ground features in satellite images: annotation is fast and highly accurate, while the resolution requirements on the satellite imagery are comparatively low.

Description

Ground feature annotation device and method for satellite images
Technical field
The present invention relates to the field of remote sensing technology, and in particular to a ground feature annotation device and method for satellite images.
Background art
The description of the background art herein belongs to the related art of the present invention and is provided only to explain and facilitate understanding of the invention; it should not be construed as an admission by the applicant that the described content constitutes prior art as of the filing date of this application.
Annotating the ground features (roads, houses, water bodies, etc.) in satellite images is an important topic in the satellite imaging field. At present, ground features are mostly annotated manually: for example, a road in a satellite image frame is labeled "1", a house "2", and a water body "3". Manual annotation is accurate but extremely inefficient, and annotating the ground features in a large volume of satellite imagery is time-consuming and laborious.
To overcome the above drawback, two automatic annotation methods have been proposed. The first works as follows. Each frame of a satellite image consists of pixels, and each pixel is characterized by its red (R), green (G), and blue (B) values. For two different ground features, at least one of the R, G, B values usually differs substantially, so by assigning each feature type its own R/G/B value ranges, different ground features can be annotated automatically. Take judging whether a region of a satellite image is a house or a water body. First, R/G/B ranges are defined for houses and water bodies, e.g. a house has R, G, B values in [0, 50], [0, 100], [0, 150], and a water body in [50, 150], [0, 50], [100, 250]. Then, for each pixel of the image to be annotated, it is checked whether its R, G, B values fall within the ranges of a house or a water body; for instance, a pixel with R, G, B values of 5, 10, 25 falls within the house ranges and is therefore judged to be a house. This method suffers from large errors: because of regional differences, differences in shooting time or shooting angle, or a feature's own color, the actual R, G, or B values of a feature may fall outside the ranges set by the system, so the feature cannot be annotated, or is annotated incorrectly.
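The threshold scheme just described can be sketched in a few lines. The R/G/B ranges and the test pixel below are the illustrative values from the text; the function and dictionary names are ours, not part of any described system:

```python
import numpy as np

# Illustrative per-feature R/G/B ranges from the text (hypothetical, not a real product's values)
RANGES = {
    "house": {"R": (0, 50), "G": (0, 100), "B": (0, 150)},
    "water": {"R": (50, 150), "G": (0, 50), "B": (100, 250)},
}

def classify_pixel(r, g, b):
    """Return the first feature class whose R/G/B ranges all contain the pixel, else None."""
    for name, rng in RANGES.items():
        if (rng["R"][0] <= r <= rng["R"][1]
                and rng["G"][0] <= g <= rng["G"][1]
                and rng["B"][0] <= b <= rng["B"][1]):
            return name
    return None

# The pixel (5, 10, 25) from the text falls inside the house ranges
label = classify_pixel(5, 10, 25)
```

The sketch also makes the failure mode obvious: a house pixel whose values drift outside the fixed ranges (for lighting or seasonal reasons) simply returns `None` or the wrong class.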
In the second automatic annotation method, each ground feature is first given a physical characterization, which includes its color, shape, and so on. For example, to annotate buildings in a satellite image, a physical characterization of buildings is defined first, including their color gamut and shape range; when a feature appearing in the image satisfies this characterization, it is judged to be a building. This method has the following problems: 1. the error is large, because an accurate physical characterization of a feature is hard to define, and regional differences, shooting time, shooting angle, or a feature's own color may make its actual appearance disagree with the defined characterization, so the feature cannot be annotated, or is annotated incorrectly; 2. the demands on the satellite image are high, since extracting a physical characterization requires a high-resolution image; 3. the applicability is poor, since the satellite imagery of different regions requires different physical characterizations of the same feature types.
Summary of the invention
The present invention provides a ground feature annotation device and method for satellite images with superior performance.
An embodiment of the first aspect of the present invention provides a ground feature annotation device for satellite images, comprising: a first unit configured to obtain a training set, extract the ground feature annotation features of the training set, and generate a training model, where the training set contains a plurality of first satellite images whose ground features have already been annotated; and a second unit configured to obtain the training model and annotate the ground features in a second satellite image according to it.
Preferably, the ground feature annotation device further includes a third unit configured to annotate, according to instructions, the ground features in the plurality of first satellite images, thereby forming the training set.
Preferably, the third unit includes: a cropping module that crops a satellite image into several first satellite images; and a labeling module that selects a plurality of these first satellite images and, according to instructions, annotates the ground features in the selected images, forming the training set.
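Using the 512x512 tile size the embodiments mention later, the cropping and random-selection behavior can be sketched as follows. The scene dimensions, function names, and subset size are illustrative assumptions:

```python
import numpy as np

def crop_tiles(image, tile=512):
    """Sequentially crop an (H, W, C) image into non-overlapping tile x tile patches,
    discarding partial tiles at the right/bottom edges."""
    h, w = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(1024, 1536, 3), dtype=np.uint8)  # mock satellite scene
tiles = crop_tiles(scene)                               # 2 rows x 3 cols = 6 first images
chosen = rng.choice(len(tiles), size=3, replace=False)  # random subset sent for manual labeling
```

Randomly sampling tiles, rather than labeling every tile, is what keeps the manual annotation workload bounded while still covering diverse ground features.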
Preferably, the third unit further includes a preprocessing module that preprocesses the satellite image; the cropping module then crops the preprocessed satellite image.
Preferably, the third unit further includes a data augmentation module used to perform data augmentation on the first satellite images in the training set.
Preferably, the ground feature annotation device further includes a fourth unit configured to sample the model-generation process periodically to obtain the current training model, annotate a validation set with it, and compute, according to a loss function, the error value between the annotation result and the true annotation of the validation set; the first unit corrects the current training model according to the loss function and the error value.
Preferably, the fourth unit includes an input module, a computation module, and an output module. The input module obtains the validation set and the current training model and annotates the validation set according to the current training model; the computation module obtains the annotation result and computes, with the loss function, the error value between the annotation result and the true annotation of the validation set; the output module passes the error value to the first unit.
Preferably, the loss function is a binary cross-entropy function.
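A minimal numpy sketch of the binary ("two-value") cross-entropy between a true 0/1 mask and a predicted probability mask; the clipping epsilon is a standard numerical-stability assumption, not a value from the text:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean binary cross-entropy between a 0/1 ground-truth mask and a probability mask."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
good = binary_cross_entropy(y_true, np.array([[0.9, 0.1], [0.1, 0.9]]))  # close prediction
bad = binary_cross_entropy(y_true, np.array([[0.1, 0.9], [0.9, 0.1]]))   # inverted prediction
```

As expected, the loss is small when the probability mask agrees with the true mask and large when it is inverted, which is exactly the signal the fourth unit feeds back to correct the model.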
Preferably, the ground feature annotation device further includes a fifth unit configured to annotate a test set with the training model obtained by the second unit, and to determine, from the deviation between the annotation result and the true annotation of the test set, whether the training model obtained by the second unit is qualified.
Preferably, the ground feature annotation device further includes a sixth unit configured to apply expansion (dilation) processing to the second satellite image after its ground features have been annotated.
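The "expansion process" reads as a morphological dilation of the annotated mask; the sketch below is one plausible interpretation, with the 3x3 square structuring element being our assumption:

```python
import numpy as np

def dilate(mask, k=3):
    """Morphological dilation of a binary mask with a k x k square structuring element
    (one interpretation of the 'expansion process' applied to the labeled image)."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dr in range(k):          # OR together all k x k shifted copies of the mask
        for dc in range(k):
            out |= padded[dr:dr + h, dc:dc + w]
    return out

mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1                   # a single labeled pixel
grown = dilate(mask)             # grows into its 3 x 3 neighborhood
```

Dilating the labeled regions slightly can close small gaps and smooth ragged feature boundaries in the output mask.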
Preferably, the first satellite images and the second satellite image each measure 512x512 pixels.
Preferably, the first satellite images and the second satellite image are three-channel images, and the annotated second satellite image is a single-channel image.
Preferably, the first unit generates the training model with a U-net neural network.
Preferably, the U-net neural network includes first convolutional layers, pooling layers, deconvolutional layers, and second convolutional layers, configured as follows: a pooling layer is placed between every two first convolutional layers; a deconvolutional layer is placed between the last first convolutional layer and the first second convolutional layer; and a deconvolutional layer is placed between every two second convolutional layers. The first convolutional layers perform convolution operations, the pooling layers perform pooling operations, the deconvolutional layers perform deconvolution operations, and the second convolutional layers perform convolution operations.
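The down/up layer pattern above can be illustrated with simple shape bookkeeping on the 512x512 input tile. The 2x2 pooling/deconvolution stride and the depth of four blocks are assumptions made for the sketch, not figures stated here:

```python
# Shape bookkeeping for the described layer pattern: convolution pairs keep H and W,
# pooling halves them on the way down, deconvolution doubles them on the way up.
# The stride of 2 and the depth of four blocks are illustrative assumptions.

def down_block(hw):
    """Two same-size convolutions followed by 2x2 pooling: H and W are halved."""
    return hw // 2

def up_block(hw):
    """A stride-2 deconvolution before the next convolution pair: H and W are doubled."""
    return hw * 2

size = 512                      # input tile size from the embodiment
encoder = [size]
for _ in range(4):              # four down blocks (assumed depth)
    size = down_block(size)
    encoder.append(size)
for _ in range(4):              # matching up blocks restore the input size
    size = up_block(size)
```

The symmetric halving and doubling is what lets the network emit a per-pixel probability mask the same size as its input tile.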
Preferably, the U-net neural network is trained with the Adam optimization algorithm.
Preferably, the U-net neural network performs convolution operations with kernels of size NxN on pixel matrices of size MxM. Before each convolution operation, (N-1) all-zero rows are appended after the last row of the pixel matrix and (N-1) all-zero columns after its last column, forming a pixel matrix of size (M+N-1)x(M+N-1); the U-net neural network then performs the convolution operation on this (M+N-1)x(M+N-1) pixel matrix.
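The one-sided zero padding described here can be checked numerically: appending (N-1) zero rows and columns to an MxM matrix and then applying a "valid" NxN convolution yields an MxM output again. A minimal numpy sketch, with arbitrary input and kernel values, and convolution implemented as cross-correlation as in most deep-learning frameworks:

```python
import numpy as np

def pad_and_convolve(x, kernel):
    """Append (N-1) all-zero rows and columns to an M x M matrix, then run a 'valid'
    N x N convolution (cross-correlation); the output is M x M again."""
    m, n = x.shape[0], kernel.shape[0]
    padded = np.pad(x, ((0, n - 1), (0, n - 1)))          # (M+N-1) x (M+N-1)
    out = np.zeros((m, m))
    for r in range(m):
        for c in range(m):
            out[r, c] = np.sum(padded[r:r + n, c:c + n] * kernel)
    return padded, out

x = np.arange(16, dtype=float).reshape(4, 4)              # M = 4
kernel = np.ones((3, 3))                                  # N = 3
padded, out = pad_and_convolve(x, kernel)
```

Note this differs from the more common symmetric "same" padding only in where the zeros go: here they are all on the bottom and right edges.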
Preferably, the U-net neural network further includes a classification layer placed after the last second convolutional layer; the classification layer converts the output matrix of the last second convolutional layer into a probability matrix.
Preferably, the classification layer converts the output matrix of the last second convolutional layer into a probability matrix with a Sigmoid activation function.
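A minimal sketch of the classification layer's role. The 0.5 threshold for turning the probability matrix into a binary mask is our assumption, not a value stated in the text:

```python
import numpy as np

def sigmoid(z):
    """Map the final convolutional layer's real-valued outputs to (0, 1) probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([[-2.0, 0.0], [0.5, 3.0]])   # mock output of the last second conv layer
probs = sigmoid(logits)                         # the probability matrix
mask = (probs >= 0.5).astype(np.uint8)          # hypothetical 0.5 cut to a binary mask
```

Each entry of `probs` can be read as the probability that the corresponding pixel belongs to the target ground feature class.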
An embodiment of the second aspect of the present invention provides a ground feature annotation method for satellite images, comprising: extracting the ground feature annotation features of a training set to generate a training model, the training set containing a plurality of first satellite images whose ground features have already been annotated; and annotating the ground features in a second satellite image according to the training model.
Preferably, the training set is obtained by annotating, according to instructions, the ground features in a plurality of first satellite images.
Preferably, the training set is obtained as follows: a satellite image is cropped into several first satellite images; a plurality of these first satellite images are selected, and the ground features in the selected images are annotated according to instructions, forming the training set.
Preferably, the ground feature annotation method further includes preprocessing the satellite image before it is cropped.
Preferably, the ground feature annotation method further includes performing data augmentation on the first satellite images in the training set.
Preferably, the ground feature annotation method further includes: sampling the model-generation process periodically to obtain the current training model; annotating a validation set with the current training model; computing, according to a loss function, the error value between the annotation result and the true annotation of the validation set; and correcting the current training model according to that error value.
Preferably, the loss function is a binary cross-entropy function.
Preferably, the ground feature annotation method further includes annotating a test set with the training model, and judging, from the deviation between the annotation result and the true annotation of the test set, whether the training model is qualified.
Preferably, the ground feature annotation method further includes applying expansion (dilation) processing to the second satellite image after its ground features have been annotated.
Preferably, the first satellite images and the second satellite image each measure 512x512 pixels.
Preferably, the first satellite images and the second satellite image are three-channel images, and the annotated second satellite image is a single-channel image.
Preferably, the training model is generated with a U-net neural network.
Preferably, the U-net neural network includes first convolutional layers, pooling layers, deconvolutional layers, and second convolutional layers, configured as follows: a pooling layer is placed between every two first convolutional layers; a deconvolutional layer is placed between the last first convolutional layer and the first second convolutional layer; and a deconvolutional layer is placed between every two second convolutional layers. The first convolutional layers perform convolution operations, the pooling layers perform pooling operations, the deconvolutional layers perform deconvolution operations, and the second convolutional layers perform convolution operations.
Preferably, the U-net neural network is trained with the Adam optimization algorithm.
Preferably, the convolution operation proceeds as follows: convolution uses kernels of size NxN on a pixel matrix of size MxM; (N-1) all-zero rows are appended after the last row of the pixel matrix and (N-1) all-zero columns after its last column, forming a pixel matrix of size (M+N-1)x(M+N-1); the U-net neural network then performs the convolution operation on this (M+N-1)x(M+N-1) pixel matrix.
Preferably, the U-net neural network further includes a classification layer placed after the last second convolutional layer; the classification layer converts the output matrix of the last second convolutional layer into a probability matrix.
Preferably, the classification layer converts the output matrix of the last second convolutional layer into a probability matrix with a Sigmoid activation function.
An embodiment of the third aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the above ground feature annotation method for satellite images.
An embodiment of the fourth aspect of the present invention provides computer equipment including a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the program, the steps of the above ground feature annotation method for satellite images are implemented.
The ground feature annotation device and method, computer-readable storage medium, and computer equipment provided by the embodiments of the present invention realize automatic annotation of ground features in satellite images: annotation is fast and highly accurate, while the resolution requirements on the satellite imagery are comparatively low.
Additional aspects and advantages of the present invention will become apparent in the following description, or may be learned through practice of the invention.
Description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a structural schematic diagram of the first embodiment of the ground feature annotation device for satellite images of the present invention;
Fig. 2 is a structural schematic diagram of the second embodiment of the ground feature annotation device for satellite images of the present invention;
Fig. 3 is a structural schematic diagram of the third unit in the second embodiment of the ground feature annotation device for satellite images of the present invention;
Fig. 4 is a structural schematic diagram of the fourth unit in the second embodiment of the ground feature annotation device for satellite images of the present invention;
Fig. 5 is a flowchart of the first embodiment of the ground feature annotation method for satellite images of the present invention;
Fig. 6 is a flowchart of the second embodiment of the ground feature annotation method for satellite images of the present invention;
Fig. 7 schematically illustrates the U-net neural network model used by the second embodiment of the ground feature annotation device for satellite images of the present invention;
Fig. 8 schematically illustrates a second satellite image;
Fig. 9 shows the output pattern after Fig. 8 has been processed by the second embodiment of the ground feature annotation device for satellite images of the present invention.
The correspondence between reference numerals and component names in Fig. 1 to Fig. 4 is: 1 first unit, 2 second unit, 3 third unit, 31 cropping module, 32 labeling module, 33 preprocessing module, 34 data augmentation module, 4 fourth unit, 41 input module, 42 computation module, 43 output module, 6 fifth unit, 7 sixth unit. The correspondence in Fig. 7 is: 51 first convolutional layer, 52 pooling layer, 53 deconvolutional layer, 54 second convolutional layer, 55 classification layer.
Detailed description of the embodiments
To better understand the objects, features, and advantages of the present invention, the invention is further described in detail below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, in the absence of conflict, the embodiments of the application and the features within them can be combined with each other.
Many details are set forth in the following description to facilitate a thorough understanding of the present invention; however, the invention may also be implemented in ways other than those described here, so the scope of protection of the invention is not limited by the specific embodiments described below.
The following provides multiple embodiments of the present invention. Although each embodiment represents a single combination, different embodiments may be substituted for one another or merged, so the invention should also be regarded as covering all possible combinations of the same and/or different recorded embodiments. Thus, if one embodiment contains A, B, and C, and another contains the combination of B and D, the invention should also be regarded as including every other possible combination containing one or more of A, B, C, and D, even though that combination may not be literally recorded in the following text.
Fig. 1 is a structural schematic diagram of the first embodiment of the ground feature annotation device for satellite images of the present invention. As shown in Fig. 1, this embodiment includes a first unit 1 and a second unit 2. The first unit 1 is configured to obtain a training set, extract its ground feature annotation features, and generate a training model; the training set contains a plurality of first satellite images whose ground features have already been annotated. The second unit 2 is configured to obtain the training model and annotate the ground features in a second satellite image according to it.
In this first embodiment, the first unit 1 obtains the training set, learns the ground feature annotation features it contains by extracting them, and generates a training model; the second unit 2, connected to the first unit 1, obtains the training model and uses it to annotate the ground features in a second satellite image. On the one hand, this avoids the time and labor of manual annotation and realizes fast, automatic ground feature annotation of satellite images. On the other hand, because both the first and the second satellite images are satellite imagery, learning the annotation features of the first satellite images enables annotation of the ground features of the second satellite image: even when the satellite imagery has relatively low clarity, a training model generated from the first satellite images can still be applied to annotate the second satellite image, and the annotation accuracy remains high.
In this first embodiment, the training set can be formed by manually annotating the ground features in the first satellite images, or by directly retrieving historical data, such as an accurately annotated version of the satellite imagery of a certain region. Note that the precision of the annotations in the training set directly affects the precision of the training model, so the ground features in the training set should be annotated as accurately as possible. The ground feature annotation features extracted by the first unit 1 include: feature color, feature shape, connection relations between features, aggregation and arrangement relations of features, and feature edge characteristics. By extracting these annotation features, a training model is generated that contains the annotation features corresponding to each ground feature, realizing automatic annotation of the ground features in the second satellite image.
Fig. 2 is a structural schematic diagram of the second embodiment of the ground feature annotation device for satellite images of the present invention. As shown in Fig. 2, in this embodiment the device further includes a third unit 3 configured to annotate, according to instructions, the ground features in a plurality of first satellite images, forming the training set. Here the instructions are annotation instructions issued by annotation personnel, and the third unit 3 responds to them by annotating the first satellite images. For example, a first satellite image is displayed on a touch screen through a touch display system; the annotator identifies the ground features in the displayed image and issues instructions to the touch screen (such as circling or smearing a feature in the image) to annotate the ground features in the first satellite image. By providing the third unit 3 to annotate the ground features in the first satellite images, a training set can be built from satellite imagery.
Fig. 3 is a structural schematic diagram of the third unit in the second embodiment of the ground feature annotation device for satellite images of the present invention. As shown in Fig. 3, the third unit 3 includes a cropping module 31 and a labeling module 32. The cropping module 31 crops a satellite image into several first satellite images; the labeling module 32 selects a plurality of them and, according to instructions, annotates the ground features in the selected images, forming the training set. When forming the training set, the satellite image must be cropped into first satellite images that meet the data entry format required by the first unit 1. The cropping can target specified regions of the satellite image, or proceed as a sequential array over it; in this embodiment, the cropping module 31 performs sequential array cropping to obtain several first satellite images. Because this cropping approach produces a huge number of first satellite images, annotating every one of them would increase the training difficulty and training time of the first unit 1. In this embodiment, through the labeling module 32, a plurality of first satellite images are randomly selected from those produced by the cropping module 31, and the ground features in the selected images are annotated to form the training set. This effectively improves the generation rate of the training model, while the random sampling is highly reliable and ensures the diversity and abundance of the ground features in the training set.
As shown in Fig. 3, in the second embodiment the third unit 3 further includes a preprocessing module 33, which preprocesses the satellite image; the cropping module 31 then crops the preprocessed satellite image. The preprocessing operations of module 33 include, but are not limited to, stretching the satellite image and color balancing it. Preprocessing the satellite image through the preprocessing module 33 facilitates the subsequent operations.
As shown in Fig. 3, in the second embodiment the third unit 3 further includes a data augmentation module 34, used to perform data augmentation on the first satellite images in the training set. The augmentation operations of module 34 include, but are not limited to, flipping and random rotation. After a first satellite image has been annotated, the flip operation of module 34 forms one new first satellite image, and its random-rotation operation forms two more; that is, from one annotated first satellite image, the augmentation operations of module 34 form three new, already-annotated first satellite images. A training set can thus be obtained by annotating only a small number of first satellite images, saving time and labor.
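The flip-plus-two-rotations behavior described above can be sketched as follows. Applying the identical transform to the image and its annotation mask, so the labels stay aligned, is our assumption of how the augmented pairs are produced:

```python
import numpy as np

def augment(image, mask):
    """Produce three extra (image, mask) pairs from one labeled tile:
    one flip and two random rotations, with the same transform applied to both."""
    rng = np.random.default_rng(42)
    pairs = [(np.flipud(image), np.flipud(mask))]   # the flip
    for _ in range(2):                              # two random rotations
        k = int(rng.integers(1, 4))                 # 90, 180, or 270 degrees
        pairs.append((np.rot90(image, k), np.rot90(mask, k)))
    return pairs

img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)   # tiny mock three-channel tile
msk = np.array([[0, 1], [1, 0]], dtype=np.uint8)       # its single-channel annotation
extra = augment(img, msk)                              # one labeled tile becomes three more
```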
As shown in Fig. 2, in the second embodiment of the ground-object annotation device for satellite images of the present invention, the device further includes a fourth unit 4, configured to sample the iterative process of generating the training model: it obtains the current training model, uses it to annotate the ground objects in a validation set, and computes, according to a loss function, the error value between the annotation result and the true annotation. The first unit 1 obtains the error value and corrects the current training model accordingly.
In this preferred embodiment, the fourth unit 4 obtains the training model currently generated and evaluates it; the resulting error value is sent to the first unit 1, which combines this feedback with the training set to obtain a new training model, and the process repeats. Providing the fourth unit 4 thus guarantees the accuracy of the training model. For example, if the training set contains 100 first satellite images, they may be fed to the first unit 1 in ten batches of 10 images each: the first unit 1 extracts ground-object annotation features from the first batch and generates a first training model; the fourth unit 4 evaluates that model; the first unit 1 then combines the error value with the second batch to generate a second training model; and so on, until all ten batches of first satellite images have been used and the final training model is obtained. Here the validation set consists of satellite images whose ground objects have already been annotated, and the fourth unit 4 evaluates the current training model by computing, via the loss function, the error between the model's annotation of the validation set and the true annotation.
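The train-evaluate-correct cycle over ten batches can be sketched with a toy model; a single-parameter logistic model stands in for the network here (the batch structure and validation feedback are what is being illustrated, not the patent's actual model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(y, t):
    """Mean binary cross-entropy between predictions y and labels t."""
    y = np.clip(y, 1e-7, 1 - 1e-7)
    return float(-np.mean(t * np.log(y) + (1 - t) * np.log(1 - y)))

rng = np.random.default_rng(0)
x = rng.normal(size=100); t = (x > 0).astype(float)      # 100 "first satellite images"
xv = rng.normal(size=20); tv = (xv > 0).astype(float)    # validation set
w = 0.0                                                   # toy model parameter

losses = []
for b in range(10):                                       # ten batches of 10
    xb, tb = x[b * 10:(b + 1) * 10], t[b * 10:(b + 1) * 10]
    for _ in range(50):                                   # fit the current batch
        y = sigmoid(w * xb)
        w -= 0.5 * np.mean((y - tb) * xb)                 # gradient of BCE w.r.t. w
    losses.append(bce(sigmoid(w * xv), tv))               # fourth-unit-style evaluation
```

Each batch refines the model, and the validation loss recorded after each batch is the error value fed back for correction.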
Fig. 4 is a schematic structural diagram of the fourth unit in the second embodiment of the ground-object annotation device for satellite images of the present invention. As shown in Fig. 4, the fourth unit 4 includes an input module 41, a computing module 42, and an output module 43. The input module 41 obtains the validation set and the current training model, and annotates the ground objects of the validation set with the current training model; the computing module 42 obtains the annotation result and uses the loss function to compute the error value between the annotation result and the true annotation of the validation set; the output module 43 transfers the error value to the first unit 1. With this arrangement and these connections, the modules effectively evaluate the current training model and provide data support for its correction.
Further, when evaluating the current training model, the loss function is preferably the binary cross-entropy function, whose evaluation is accurate and fast.
As shown in Fig. 2, in the second embodiment of the ground-object annotation device for satellite images of the present invention, the device further includes a fifth unit 6, configured to annotate a test set with the training model obtained by the second unit 2 and to determine, from the deviation between the annotation result and the true annotation of the test set, whether that training model is qualified. Specifically, if the deviation is below a preset threshold, the training model obtained by the second unit 2 is qualified; if the deviation exceeds the threshold, the model is unqualified, in which case a new training set may be input for ground-object annotation feature extraction, or extraction may continue on the original training set, until the training model obtained by the second unit 2 is qualified. By providing the fifth unit 6, the training model is effectively judged, enhancing the annotation accuracy for ground objects in the second satellite image.
As shown in Fig. 2, in the second embodiment of the ground-object annotation device for satellite images of the present invention, the device further includes a sixth unit 7, configured to apply dilation processing to the second satellite image whose ground objects have been annotated. Dilating the annotated second satellite image yields a clear annotation of the ground objects.
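Morphological dilation of the annotated mask — one reading of the dilation processing above — can be sketched with a 3x3 structuring element built from shifted ORs:

```python
import numpy as np

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element, via shifted ORs."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = 1                      # a single annotated pixel
fat = dilate(mask)                  # grows into a 3x3 block, easier to see
```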
In one embodiment, the first satellite images and the second satellite image are 512x512 pixels in size. This size effectively supports both the generation of the training model and the annotation of the ground objects in the second satellite image; after the second satellite images have been annotated, they can be stitched together into a complete annotated satellite image.
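Stitching annotated 512x512 tiles back into the full image, assuming the tiles came from a non-overlapping row-major grid (a sketch):

```python
import numpy as np

def stitch(tiles, rows, cols):
    """Reassemble a row-major list of equally sized tiles into one image."""
    return np.block([[tiles[r * cols + c] for c in range(cols)] for r in range(rows)])

tile = 512
full = np.arange(4 * tile * tile, dtype=np.uint8).reshape(2 * tile, 2 * tile)
tiles = [full[y:y + tile, x:x + tile]          # crop into a 2x2 grid of 512x512 tiles
         for y in range(0, 2 * tile, tile)
         for x in range(0, 2 * tile, tile)]
rebuilt = stitch(tiles, rows=2, cols=2)        # lossless round trip
```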
In one embodiment, the first satellite images and the second satellite image are three-channel images, and the annotated second satellite image is a single-channel image. Satellite images are usually three-channel (RGB) color images; the annotated second satellite image is single-channel, i.e. a grayscale image, and a single-channel image displays the annotation result intuitively. In one embodiment, the first satellite images may be converted to single-channel images after annotation; extracting ground-object annotation features from single-channel images shortens the time needed to generate the training model, and because the contrast of a single-channel image is higher, the first unit 1 can extract the annotation features effectively, enhancing the accuracy of the training model.
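Converting a three-channel (RGB) tile to a single-channel grayscale image can be done with the common luminance weights — an assumption for illustration, since the patent does not fix the conversion:

```python
import numpy as np

def to_single_channel(rgb):
    """Weighted sum of R, G, B -> one grayscale channel (ITU-R BT.601 weights)."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255                      # pure red tile
gray = to_single_channel(rgb)          # (2, 2) single-channel result
```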
In the second embodiment of the ground-object annotation device for satellite images of the present invention, the first unit 1 generates the training model using a U-net neural network. A U-net neural network can accurately extract the ground-object annotation features in the training set and form the training model, while requiring a smaller training set and a shorter model-generation time.
Fig. 7 schematically illustrates the U-net neural network model used by the second embodiment of the ground-object annotation device for satellite images of the present invention. As shown in Fig. 7, the U-net neural network includes first convolutional layers 51, pooling layers 52, deconvolution layers 53, and second convolutional layers 54, and is configured as follows: a pooling layer 52 is placed between every two first convolutional layers 51; a deconvolution layer 53 is placed between the last first convolutional layer and the first second convolutional layer; and a deconvolution layer 53 is placed between every two second convolutional layers 54. The first convolutional layers 51 perform convolution, the pooling layers 52 perform pooling, the deconvolution layers 53 perform deconvolution, and the second convolutional layers 54 perform convolution. The process by which this U-net is trained to obtain the training model is as follows. First, a first satellite image is down-sampled through the first convolutional layers 51 and pooling layers 52 of the U-net: each first convolutional layer 51 convolves the incoming image with its convolution kernels, and each layer may apply a single convolution or several in succession. The first satellite image first passes through the first first-convolutional layer; the image formed after convolution enters the first pooling layer for pooling; the pooled image then enters the second first-convolutional layer for convolution; and this repeats until down-sampling is complete. In this embodiment, down-sampling is considered complete after four down-sampling stages (each stage passing the image through one first convolutional layer 51 and one pooling layer 52). The down-sampled image is then up-sampled through the deconvolution layers 53 and second convolutional layers 54: the image first passes through the first deconvolution layer 53 for deconvolution, each deconvolution layer's operation corresponding one-to-one with a pooling layer's pooling operation; the deconvolved image enters the first second-convolutional layer for convolution, each second convolutional layer 54 likewise applying one convolution or several in succession; the convolved image then enters the second deconvolution layer; and this repeats until up-sampling is complete. In this preferred embodiment, up-sampling is considered complete after four up-sampling stages (each stage passing the image through one deconvolution layer 53 and one second convolutional layer 54). The image produced when a first satellite image has passed through the U-net is compared with that image's ground-object annotation, and the convolution kernels are updated in real time according to the comparison. After many such training iterations, the resulting U-net neural network (i.e. the training model) can annotate the ground objects in a second satellite image.
Through this U-net structure, as the training set is continually fed into the network, the convolution kernels are adjusted in real time by the network's own feedback mechanism as more training data arrive. The kernels characterize the ground-object annotation features of the training set, so the U-net extracts those features and becomes a trained network; feeding a second satellite image into the trained U-net then yields automatic annotation of the ground objects in that image. In this embodiment, the four down-sampling stages and four up-sampling stages let the network extract the annotation features in the training set effectively and quickly, so an accurate training model can be generated faster.
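The four-stage encoder-decoder just described can be sketched in pure NumPy as a forward pass: untrained random kernels, nearest-neighbour upsampling standing in for deconvolution, and skip concatenations as in a standard U-net. Layer widths and the 64x64 input are illustrative assumptions, not the patent's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, out_ch, relu=True):
    """Same-padded 3x3 convolution with random (untrained) kernels, optional ReLU."""
    in_ch, h, w = x.shape
    k = rng.normal(scale=0.1, size=(out_ch, in_ch, 3, 3))
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((out_ch, h, w))
    for o in range(out_ch):
        for i in range(in_ch):
            for dy in range(3):
                for dx in range(3):
                    out[o] += k[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + w]
    return np.maximum(out, 0) if relu else out

def pool2(x):                                      # 2x2 max pooling
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def up2(x):                                        # nearest-neighbour "deconvolution"
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_forward(img, base=4):
    skips, x = [], img
    for d in range(4):                             # four down-sampling stages
        x = conv3x3(x, base * 2 ** d)
        skips.append(x)
        x = pool2(x)
    x = conv3x3(x, base * 16)                      # bottleneck
    for d in reversed(range(4)):                   # four up-sampling stages
        x = up2(x)
        x = conv3x3(np.concatenate([x, skips[d]]), base * 2 ** d)
    logits = conv3x3(x, 1, relu=False)             # feeds the classification layer
    return 1.0 / (1.0 + np.exp(-logits))           # Sigmoid -> probability matrix

out = unet_forward(rng.normal(size=(3, 64, 64)))   # 3-channel in, 1-channel out
```

The spatial size is halved four times and doubled four times, so a three-channel input comes back as a same-sized single-channel probability matrix.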
Further, the U-net neural network is preferably trained with the Adam optimization algorithm, which improves the speed of the U-net's training and yields good training results.
In one embodiment, the U-net neural network applies convolution kernels of size NxN to pixel matrices of size MxM. Before the convolution, (N-1) rows of zeros are appended after the last row of the pixel matrix and (N-1) columns of zeros after its last column, forming a pixel matrix of size (M+N-1) x (M+N-1); the U-net neural network then performs the convolution on this (M+N-1) x (M+N-1) matrix. This scheme ensures that every pixel of the MxM pixel matrix takes part in the convolution, guaranteeing the accuracy of annotation feature extraction and preventing information loss.
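The padding scheme above — appending (N-1) zero rows and columns so that a valid NxN convolution covers every pixel and returns an MxM output — can be checked numerically:

```python
import numpy as np

def pad_for_full_coverage(pixels, n):
    """Append (n-1) zero rows and (n-1) zero columns after the matrix."""
    return np.pad(pixels, ((0, n - 1), (0, n - 1)))

def valid_conv(pixels, kernel):
    """Valid (no-padding) 2D convolution."""
    n = kernel.shape[0]
    h = pixels.shape[0] - n + 1
    w = pixels.shape[1] - n + 1
    out = np.zeros((h, w))
    for dy in range(n):
        for dx in range(n):
            out += kernel[dy, dx] * pixels[dy:dy + h, dx:dx + w]
    return out

m, n = 5, 3
x = np.ones((m, m))
padded = pad_for_full_coverage(x, n)         # (M+N-1) x (M+N-1) = 7 x 7
y = valid_conv(padded, np.ones((n, n)))      # output returns to M x M
```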
In one embodiment, the U-net neural network further includes a classification layer 55, arranged after the last second convolutional layer; the classification layer 55 converts the output matrix of the last second convolutional layer into a probability matrix. Each entry of the probability matrix is the probability that the corresponding point is a certain ground object; a threshold is set in advance, and if the probability at a position exceeds the threshold, that position is judged to be the ground object. By providing the classification layer 55, the satellite image fed into the U-net is converted into a probability matrix, each pixel of the satellite image is judged intuitively, and a good annotation effect is achieved.
Further, when evaluating the current training model, the binary cross-entropy function is defined as f = -Σ[t_i·log(y_i) + (1 - t_i)·log(1 - y_i)], where t_i is the class label: t_i = 0 when the pixel's ground-object annotation is correct, and t_i = 1 when it is wrong; y_i is the probability output by the classification layer 55. During training of the U-net neural network, the parameters and convolution kernels are continually optimized to make the value of the binary cross-entropy function as small as possible, so that the training model obtained by the U-net training is accurate.
Further, the classification layer 55 converts the output matrix of the last second convolutional layer into a probability matrix using the Sigmoid activation function, f(x) = 1/(1 + e^(-x)), where x is the output of the previous layer. After the Sigmoid activation, the output is a value between 0 and 1 representing the probability that the pixel at the corresponding position belongs to a certain ground object.
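The Sigmoid activation, the thresholding of the resulting probability matrix, and the binary cross-entropy loss fit together as follows (the loss is averaged here rather than summed, a common variant):

```python
import numpy as np

def sigmoid(x):
    """f(x) = 1 / (1 + e^-x): maps layer outputs to (0, 1) probabilities."""
    return 1.0 / (1.0 + np.exp(-x))

def binary_cross_entropy(y, t):
    """Mean of -[t_i*log(y_i) + (1-t_i)*log(1-y_i)] over all pixels."""
    y = np.clip(y, 1e-7, 1 - 1e-7)      # avoid log(0)
    return float(-np.mean(t * np.log(y) + (1 - t) * np.log(1 - y)))

logits = np.array([-2.0, 0.0, 2.0])        # output of the last second convolutional layer
probs = sigmoid(logits)                    # probability matrix
labels = (probs > 0.5).astype(np.uint8)    # preset threshold -> ground-object decision
loss = binary_cross_entropy(probs, np.array([0.0, 0.0, 1.0]))
```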
Fig. 8 schematically illustrates a second satellite image; Fig. 9 shows the output obtained by processing Fig. 8 with the second embodiment of the ground-object annotation device for satellite images of the present invention. The second embodiment achieves a good ground-object annotation of the satellite image.
Fig. 5 is a flow chart of the first embodiment of the ground-object annotation method for satellite images of the present invention. As shown in Fig. 5, the first embodiment of the ground-object annotation method for satellite images of the present invention includes the following steps:
T01: extract the ground-object annotation features of a training set and generate a training model;
T02: annotate the ground objects in a second satellite image according to the training model;
where the training set comprises multiple first satellite images whose ground objects have been annotated.
By extracting the annotation features of the training set to generate the training model, and annotating the ground objects in the second satellite image according to that model, automatic annotation of the ground objects in the second satellite image is achieved.
Fig. 6 is a flow chart of the second embodiment of the ground-object annotation method for satellite images of the present invention. As shown in Fig. 6, the second embodiment of the ground-object annotation method for satellite images of the present invention includes the following steps:
S01: preprocess the satellite image;
S02: crop the preprocessed satellite image to form a number of first satellite images;
S03: select multiple first satellite images and annotate their ground objects, forming a training set;
S04: perform data augmentation on the first satellite images in the training set;
S05: extract the ground-object annotation features of the training samples and generate a training model;
S06: compute an error value on an input validation set, and feed the error value, together with the training samples, back into step S05 to generate a new training model;
S07: judge whether the training model is qualified; if qualified, go to step S08, otherwise return to step S05;
S08: annotate the ground objects of a second satellite image with the training model;
S09: apply dilation processing to the annotated second satellite image.
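Steps S01–S09 can be strung together as a single driver; every stage below is a stub standing in for the units described earlier, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(img):                       # S01: stand-in for stretching / color balancing
    return img

def crop(img, t=8):                        # S02: grid-crop into first satellite images
    return [img[y:y + t, x:x + t]
            for y in range(0, img.shape[0], t)
            for x in range(0, img.shape[1], t)]

def annotate(tiles):                       # S03: pair each tile with a toy ground-truth mask
    return [(x, (x > 0.5).astype(np.uint8)) for x in tiles]

def augment(pairs):                        # S04: one flip per annotated pair
    return pairs + [(np.fliplr(i), np.fliplr(m)) for i, m in pairs]

def train(pairs):                          # S05/S06: the "model" is just a threshold here
    return 0.5

def predict(model, img):                   # S08: annotate a second satellite image
    return (img > model).astype(np.uint8)

img = rng.random((16, 16))
pairs = augment(annotate(crop(preprocess(img))))
model = train(pairs)                       # S07 (qualification check) omitted in this stub
mask = predict(model, img)                 # S09 (dilation) likewise omitted
```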
Since the embodiments of the ground-object annotation method for satellite images, the computer device, and the computer-readable storage medium in this specification are substantially similar to the embodiments of the ground-object annotation device for satellite images, the relevant details can be found in the description of the device embodiments; repetitive description is avoided here.
An embodiment of another aspect of the present invention provides a computer device including a memory, a processor, and a computer program stored on the memory and runnable on the processor; the processor implements the steps of the ground-object annotation method for satellite images when executing the program.
An embodiment of a further aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; the program implements the steps of the ground-object annotation method for satellite images when executed by a processor. The computer-readable storage medium may include, but is not limited to, any type of disk, including floppy disks, CDs, DVDs, CD-ROMs, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, mini-drives and magnetic disks, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any other type of medium or device suitable for storing instructions and/or data. The processing device may be any device suitable for processing data, such as a personal computer, a general-purpose or special-purpose digital computer, a computing device, or a machine.
In embodiments of the present invention, the processor is the control center of the computer system; it connects the various parts of the entire system through various interfaces and lines, and performs the various functions of the computer system and/or processes data by running or executing the software programs, units, and/or modules stored in the memory and calling the data stored in the memory. The processor may be composed of integrated circuits (ICs): for example, a single packaged IC, or several packaged ICs of the same or different functions connected together. In embodiments of the present invention, the processor may include at least one central processing unit (CPU); the CPU may have a single computing core or multiple cores, and may be the processor of a physical machine or of a virtual machine.
Those skilled in the art will clearly understand that the technical solution of the present invention can be implemented in software and/or hardware. In this specification, "module" and "unit" refer to software and/or hardware that completes a specific function independently or in cooperation with other components, where the hardware may be, for example, an FPGA (Field-Programmable Gate Array) or an IC (Integrated Circuit).
The ground-object annotation device and method for satellite images, computer-readable storage medium, and computer device provided by the embodiments of the present invention achieve automatic annotation of the ground objects in satellite images; the annotation is fast and highly accurate, while placing relatively low requirements on the clarity of the satellite image.
In the present invention, the terms "first", "second", and "third" are used for description only and shall not be understood as indicating or implying relative importance; the term "multiple" means two or more, unless otherwise expressly limited. Terms such as "installation", "connected", "connection", and "fixation" shall be understood broadly; for example, a "connection" may be a fixed connection, a detachable connection, or an integral connection, and "connected" may mean directly connected or indirectly connected through an intermediary. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific circumstances.
In the description of the present invention, it should be understood that orientation or positional terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings; they are used only for convenience and simplicity of description, and do not indicate or imply that the device or unit referred to must have a specific orientation or be configured and operated in a specific orientation. They therefore cannot be construed as limiting the present invention.
In the description of this specification, terms such as "one embodiment", "some embodiments", and "a specific embodiment" mean that particular features, structures, materials, or characteristics described in connection with the embodiment or example are contained in at least one embodiment or example of the present invention. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example; moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above are only the preferred embodiments of the present invention and are not intended to restrict it; for those skilled in the art, the invention may be modified and varied in many ways. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in its scope of protection.

Claims (37)

1. A ground-object annotation device for satellite images, characterized by including:
a first unit, configured to obtain a training set, extract the ground-object annotation features of the training set, and generate a training model, wherein the training set comprises multiple first satellite images whose ground objects have been annotated;
a second unit, configured to obtain the training model and annotate the ground objects in a second satellite image according to the training model.
2. The ground-object annotation device for satellite images according to claim 1, characterized in that the device further includes a third unit, configured to annotate, according to instructions, the ground objects in the multiple first satellite images, forming the training set.
3. The ground-object annotation device for satellite images according to claim 2, characterized in that the third unit includes:
a cropping module, which crops the satellite image to obtain a number of first satellite images;
a labeling module, which selects multiple first satellite images from the number of first satellite images and annotates, according to instructions, the ground objects in the selected first satellite images, forming the training set.
4. The ground-object annotation device for satellite images according to claim 3, characterized in that the third unit further includes a preprocessing module that preprocesses the satellite image, the cropping module cropping the preprocessed satellite image.
5. The ground-object annotation device for satellite images according to claim 3, characterized in that the third unit further includes a data augmentation module for performing data augmentation on the first satellite images in the training set.
6. The ground-object annotation device for satellite images according to claim 1, characterized in that the device further includes a fourth unit, configured to sample the iterative process of generating the training model, obtain the current training model, annotate a validation set with the current training model, and compute, according to a loss function, the error value between the annotation result and the true annotation of the validation set; the first unit corrects the current training model according to the error value.
7. The ground-object annotation device for satellite images according to claim 6, characterized in that the fourth unit includes an input module, a computing module, and an output module; the input module obtains the validation set and the current training model and annotates the validation set with the current training model; the computing module obtains the annotation result and uses the loss function to compute the error value between the annotation result and the true annotation of the validation set; the output module transfers the error value to the first unit.
8. The ground-object annotation device for satellite images according to claim 7, characterized in that the loss function is the binary cross-entropy function.
9. The ground-object annotation device for satellite images according to claim 1, characterized in that the device further includes a fifth unit, configured to annotate a test set with the training model obtained by the second unit and to determine, according to the deviation between the annotation result and the true annotation of the test set, whether the training model obtained by the second unit is qualified.
10. The ground-object annotation device for satellite images according to claim 1, characterized in that the device further includes a sixth unit, configured to apply dilation processing to the second satellite image whose ground objects have been annotated.
11. The ground-object annotation device for satellite images according to claim 1, characterized in that the first satellite images and the second satellite image are 512x512 pixels in size.
12. The ground-object annotation device for satellite images according to claim 1, characterized in that the first satellite images and the second satellite image are three-channel images, and the annotated second satellite image is a single-channel image.
13. The ground-object annotation device for satellite images according to any one of claims 1-12, characterized in that the first unit generates the training model using a U-net neural network.
14. The ground-object annotation device for satellite images according to claim 13, characterized in that the U-net neural network includes first convolutional layers, pooling layers, deconvolution layers, and second convolutional layers, and is configured such that: a pooling layer is provided between every two first convolutional layers; a deconvolution layer is provided between the last first convolutional layer and the first second convolutional layer; a deconvolution layer is provided between every two second convolutional layers; the first convolutional layers perform convolution, the pooling layers perform pooling, the deconvolution layers perform deconvolution, and the second convolutional layers perform convolution.
15. The ground-object annotation device for satellite images according to claim 14, characterized in that the U-net neural network is trained using the Adam optimization algorithm.
16. The ground-object annotation device for satellite images according to claim 14, characterized in that the U-net neural network performs convolution with kernels of size NxN on pixel matrices of size MxM; before the convolution, (N-1) rows of zeros are appended after the last row of the pixel matrix and (N-1) columns of zeros after its last column, forming a pixel matrix of size (M+N-1) x (M+N-1), and the U-net neural network performs the convolution on the pixel matrix of size (M+N-1) x (M+N-1).
17. The ground-object annotation device for satellite images according to claim 14, characterized in that the U-net neural network further includes a classification layer arranged after the last second convolutional layer, the classification layer converting the output matrix of the last second convolutional layer into a probability matrix.
18. The ground-object annotation device for satellite images according to claim 17, characterized in that the classification layer converts the output matrix of the last second convolutional layer into a probability matrix using the Sigmoid activation function.
19. A ground-object annotation method for satellite images, characterized by including:
extracting the ground-object annotation features of a training set and generating a training model, the training set comprising multiple first satellite images whose ground objects have been annotated;
annotating the ground objects in a second satellite image according to the training model.
20. The ground-object annotation method for satellite images according to claim 19, characterized in that the training set is obtained by the following step: annotating, according to instructions, the ground objects in the multiple first satellite images.
21. The ground-object annotation method for satellite images according to claim 20, characterized in that the training set is obtained by the following steps: cropping a satellite image to obtain a number of first satellite images, selecting multiple first satellite images from them, and annotating, according to instructions, the ground objects in the selected first satellite images, forming the training set.
22. The ground feature annotation method for satellite images according to claim 21, characterized in that the method further comprises: preprocessing the satellite image before it is cropped.
23. The ground feature annotation method for satellite images according to claim 21, characterized in that the method further comprises: performing data augmentation on the first satellite images in the training set.
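Claim 23 does not specify which augmentations are used; a common choice for satellite tiles, shown here purely as an assumption, is the set of flips and 90-degree rotations, applied identically to each image and its annotation mask so the labels stay aligned:

```python
import numpy as np

def augment(image, mask):
    """Return augmented (image, mask) pairs: the four 90-degree
    rotations plus horizontal and vertical flips, with the same
    transform applied to image and mask."""
    pairs = []
    for k in range(4):  # 0, 90, 180, 270-degree rotations
        pairs.append((np.rot90(image, k), np.rot90(mask, k)))
    pairs.append((np.fliplr(image), np.fliplr(mask)))  # horizontal flip
    pairs.append((np.flipud(image), np.flipud(mask)))  # vertical flip
    return pairs

img = np.arange(12).reshape(2, 2, 3)   # tiny 3-channel image
msk = np.array([[0, 1], [1, 0]])       # matching single-channel mask
aug = augment(img, msk)
```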
24. The ground feature annotation method for satellite images according to claim 19, characterized in that the method further comprises: periodically sampling the training-model generation process to obtain a current training model; performing ground feature annotation on a validation set using the current training model and calculating, according to a loss function, an error value between the annotation result and the true annotation of the validation set; and correcting the current training model according to the error value.
25. The ground feature annotation method for satellite images according to claim 24, characterized in that the loss function is a binary cross-entropy function.
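Claims 24 and 25 together measure the validation error with binary cross-entropy. A minimal sketch of that loss follows; the `eps` clipping is an implementation detail to avoid log(0), not part of the claim:

```python
import numpy as np

def binary_cross_entropy(p, y, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities p and
    ground-truth labels y (arrays of the same shape, labels in {0, 1})."""
    p = np.clip(p, eps, 1.0 - eps)  # keep log() finite
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
perfect = binary_cross_entropy(np.array([1.0, 0.0, 1.0, 0.0]), y_true)
chance = binary_cross_entropy(np.full(4, 0.5), y_true)  # -ln(0.5) ≈ 0.693
```

A perfect prediction drives the loss toward zero, while predicting 0.5 everywhere yields -ln(0.5); the error value computed this way drives the correction of the current training model.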
26. The ground feature annotation method for satellite images according to claim 19, characterized in that the method further comprises: performing ground feature annotation on a test set using the training model, and judging whether the training model is qualified according to the deviation between the annotation result and the true annotation of the test set.
27. The ground feature annotation method for satellite images according to claim 19, characterized in that the method further comprises: performing expansion (dilation) processing on the second satellite image for which ground feature annotation has been completed.
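If the expansion processing of claim 27 is read as morphological dilation of the annotated mask (an interpretation, since the claim does not define the operation), it can be sketched with a 3x3 cross structuring element built from array shifts:

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 cross structuring element,
    implemented via four one-pixel shifts OR-ed together."""
    out = mask.astype(bool)
    for _ in range(iterations):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # neighbor above
        grown[:-1, :] |= out[1:, :]   # neighbor below
        grown[:, 1:] |= out[:, :-1]   # neighbor to the left
        grown[:, :-1] |= out[:, 1:]   # neighbor to the right
        out = grown
    return out.astype(mask.dtype)

m = np.zeros((5, 5), dtype=np.uint8)
m[2, 2] = 1                 # a single annotated pixel
grown = dilate(m)           # grows into a 5-pixel cross
```

Dilation of this kind is commonly used to thicken thin annotated features (e.g. roads) so they remain visible at display resolution.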
28. The ground feature annotation method for satellite images according to claim 19, characterized in that the size of the first satellite images and of the second satellite image is 512x512 pixels.
29. The ground feature annotation method for satellite images according to claim 19, characterized in that the first satellite images and the second satellite image are three-channel images, and the second satellite image is a single-channel image after annotation.
30. The ground feature annotation method for satellite images according to any one of claims 19-29, characterized in that the training model is generated using a U-net neural network.
31. The ground feature annotation method for satellite images according to claim 30, characterized in that the U-net neural network comprises first convolutional layers, pooling layers, deconvolutional layers and second convolutional layers, and is configured such that: one pooling layer is arranged between every two first convolutional layers, one deconvolutional layer is arranged between the last first convolutional layer and the first second convolutional layer, and one deconvolutional layer is arranged between every two second convolutional layers; the first convolutional layers are used to perform convolution operations, the pooling layers to perform pooling operations, the deconvolutional layers to perform deconvolution operations, and the second convolutional layers to perform convolution operations.
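The layer arrangement of claim 31 can be illustrated by tracing spatial resolutions through the network: each pooling layer on the encoder path halves the resolution, and each deconvolutional layer on the decoder path doubles it back. The depth of 4 and the 512-pixel input are assumptions chosen to be consistent with claim 28; channel counts and skip connections are omitted:

```python
def unet_shapes(size=512, depth=4):
    """Trace spatial resolution through a U-net-style encoder/decoder:
    pooling halves the size `depth` times, deconvolution doubles it back."""
    down = [size // (2 ** d) for d in range(depth + 1)]      # encoder path
    up = [down[-1] * (2 ** d) for d in range(1, depth + 1)]  # decoder path
    return down, up

down, up = unet_shapes()
# down: 512 -> 256 -> 128 -> 64 -> 32 (after each pooling layer)
# up:   64 -> 128 -> 256 -> 512      (after each deconvolutional layer)
```

The final 512x512 decoder output then matches the input resolution, so every pixel of the second satellite image receives an annotation.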
32. The ground feature annotation method for satellite images according to claim 31, characterized in that the U-net neural network is trained using the Adam optimization algorithm.
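Claim 32 names the Adam optimization algorithm; a single Adam update, per the published formulation of Kingma and Ba (not code from the patent), looks like this:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias correction, then a parameter step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy use: minimize f(x) = x^2 (gradient 2x) starting from x = 1.0
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
```

In practice the same update is applied to every network weight, with the gradient supplied by backpropagation of the loss.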
33. The ground feature annotation method for satellite images according to claim 31, characterized in that the convolution operation proceeds as follows: a convolution kernel of size NxN is used and the pixel matrix has size MxM; (N-1) rows of all-zero vectors are appended after the last row of the pixel matrix and (N-1) columns of all-zero vectors are appended after the last column, forming a pixel matrix of size (M+N-1) x (M+N-1); the U-net neural network performs the convolution operation on the pixel matrix of size (M+N-1) x (M+N-1).
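The padding scheme of claim 33 (and claim 16) appends (N-1) zero rows and columns so that an NxN kernel slid with stride 1 returns an output of the original MxM size, since (M+N-1) - N + 1 = M. A minimal sketch, with a naive sliding-window loop standing in for the network's convolution:

```python
import numpy as np

def pad_and_convolve(pixels, kernel):
    """Zero-pad as in the claim: append (N-1) all-zero rows below and
    (N-1) all-zero columns to the right of the MxM matrix, giving an
    (M+N-1) x (M+N-1) matrix, then slide the NxN kernel with stride 1."""
    m, n = pixels.shape[0], kernel.shape[0]
    padded = np.zeros((m + n - 1, m + n - 1), dtype=pixels.dtype)
    padded[:m, :m] = pixels
    out = np.zeros((m, m), dtype=pixels.dtype)  # (M+N-1) - N + 1 = M
    for i in range(m):
        for j in range(m):
            out[i, j] = np.sum(padded[i:i + n, j:j + n] * kernel)
    return padded, out

px = np.arange(16, dtype=float).reshape(4, 4)  # M = 4
k = np.ones((3, 3))                            # N = 3
padded, out = pad_and_convolve(px, k)          # padded is 6x6, out is 4x4
```

This one-sided padding preserves the pixel-matrix size through each convolutional layer, which is what lets the network emit one prediction per input pixel.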
34. The ground feature annotation method for satellite images according to claim 31, characterized in that the U-net neural network further comprises a classification layer arranged after the last second convolutional layer, the classification layer being used to convert the output matrix of the last second convolutional layer into a probability matrix.
35. The ground feature annotation method for satellite images according to claim 34, characterized in that the classification layer uses a sigmoid activation function to convert the output matrix of the last second convolutional layer into a probability matrix.
36. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the method according to any one of claims 19-35.
37. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 19-35.
CN201810147308.7A 2018-02-12 2018-02-12 Ground feature annotation device and method for satellite images Pending CN108470185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810147308.7A CN108470185A (en) Ground feature annotation device and method for satellite images

Publications (1)

Publication Number Publication Date
CN108470185A true CN108470185A (en) 2018-08-31

Family

ID=63265987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810147308.7A Pending CN108470185A (en) Ground feature annotation device and method for satellite images

Country Status (1)

Country Link
CN (1) CN108470185A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955702A (en) * 2014-04-18 2014-07-30 西安电子科技大学 SAR image terrain classification method based on deep RBF network
CN107016665A (en) * 2017-02-16 2017-08-04 浙江大学 CT pulmonary nodule detection method based on deep convolutional neural networks
CN107273502A (en) * 2017-06-19 2017-10-20 重庆邮电大学 Image geotagging method based on spatial cognitive learning
CN107330405A (en) * 2017-06-30 2017-11-07 上海海事大学 Aircraft target recognition method for remote sensing images based on convolutional neural networks
CN107424152A (en) * 2017-08-11 2017-12-01 联想(北京)有限公司 Organ lesion detection method and electronic device, and method and electronic device for training a neural network
CN107491721A (en) * 2017-05-05 2017-12-19 北京佳格天地科技有限公司 Remote sensing image classification device and method
CN107680088A (en) * 2017-09-30 2018-02-09 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical images

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784228A (en) * 2018-12-28 2019-05-21 苏州易助能源管理有限公司 Photovoltaic power station identification system and method based on image recognition technology
CN109977921A (en) * 2019-04-11 2019-07-05 广东电网有限责任公司 Power transmission line hidden danger detection method
CN109977921B (en) * 2019-04-11 2022-02-11 广东电网有限责任公司 Method for detecting hidden danger of power transmission line
CN111142150A (en) * 2020-01-06 2020-05-12 中国石油化工股份有限公司 Automatic intelligent obstacle avoidance design method for seismic exploration
CN113269215A (en) * 2020-02-17 2021-08-17 百度在线网络技术(北京)有限公司 Method, device, equipment and storage medium for constructing training set
CN113269215B (en) * 2020-02-17 2023-08-01 百度在线网络技术(北京)有限公司 Training set construction method, device, equipment and storage medium
CN111753887A (en) * 2020-06-09 2020-10-09 军事科学院***工程研究院后勤科学与技术研究所 Point source target image control point detection model training method and device
CN111753887B (en) * 2020-06-09 2024-05-28 军事科学院***工程研究院后勤科学与技术研究所 Point source target image control point detection model training method and device
CN113762222A (en) * 2021-11-08 2021-12-07 阿里巴巴达摩院(杭州)科技有限公司 Method and device for processing surface feature elements, storage medium and processor

Similar Documents

Publication Publication Date Title
CN108470185A (en) Ground feature annotation device and method for satellite images
Wang et al. Detect globally, refine locally: A novel approach to saliency detection
CN105139395B (en) SAR image segmentation method based on wavelet pooling convolutional neural networks
CN111833237B (en) Image registration method based on convolutional neural network and local homography transformation
CN114092833B (en) Remote sensing image classification method and device, computer equipment and storage medium
CN108764263A (en) Ground feature annotation device and method for remote sensing images
CN112464766B (en) Automatic farmland identification method and system
CN114463637B (en) Winter wheat remote sensing identification analysis method and system based on deep learning
CN109410316A (en) Object three-dimensional reconstruction method, tracking method, related apparatus, and storage medium
CN110222604A (en) Target identification method and device based on shared convolutional neural networks
CN109426773A (en) Road recognition method and device
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN110298281A (en) Video structuring method and device, electronic equipment and storage medium
CN111967401A (en) Target detection method, device and storage medium
CN110163864A (en) Image segmentation method and device, computer equipment and storage medium
CN109255382A (en) Neural network system, method and device for image matching and positioning
CN112669448A (en) Virtual data set development method, system and storage medium based on three-dimensional reconstruction technology
CN111160501A (en) Construction method and device of two-dimensional code training data set
CN113449878B (en) Data distributed incremental learning method, system, equipment and storage medium
CN117671509A (en) Remote sensing target detection method and device, electronic equipment and storage medium
CN111914596A (en) Lane line detection method, device, system and storage medium
Bi et al. Multi-scale weighted fusion attentive generative adversarial network for single image de-raining
CN116597317A (en) Remote sensing image change detection data generation method, device, equipment and medium
CN107368847A (en) Crop leaf disease recognition method and system
CN112766481B (en) Training method and device for neural network model and image detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180831