CN113487738A - Building based on virtual knowledge migration and shielding area monomer extraction method thereof - Google Patents

Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Info

Publication number
CN113487738A
CN113487738A
Authority
CN
China
Prior art keywords
building
image
sim
label
simulation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110707259.XA
Other languages
Chinese (zh)
Other versions
CN113487738B (en)
Inventor
闫奕名
杨柳青
宿南
冯收
赵春晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN202110707259.XA priority Critical patent/CN113487738B/en
Publication of CN113487738A publication Critical patent/CN113487738A/en
Application granted granted Critical
Publication of CN113487738B publication Critical patent/CN113487738B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for individual (monomer) extraction of buildings and their occluded areas based on virtual knowledge migration, relates to the field of remote sensing image information extraction, and aims to solve two problems of amodal (perspective) instance segmentation in building information extraction: insufficient training samples, and high uncertainty in both target and occlusion conditions. A virtual knowledge generation module is introduced to automatically acquire a large amount of training data with "real" occlusion-condition labels, similar semantic relations, and comprehensive coverage of observation angles, which solves the problem of insufficient training samples. By combining instance segmentation with an occlusion discrimination module and a feature pyramid network, the diversity of building shapes, scales, and occlusion conditions is handled, and high accuracy of building amodal instance segmentation is achieved.

Description

Building based on virtual knowledge migration and shielding area monomer extraction method thereof
Technical Field
The invention relates to the field of remote sensing image information extraction, in particular to the technical field of building scene simulation and building information extraction in remote sensing images.
Background
Building information extraction from visible-light remote sensing images is an important research subject in both defense and civil fields. Most related research currently relies on orthographic remote sensing images; however, in some special cases a sufficient number of orthographic images may not be obtainable, whereas oblique remote sensing images are easier to acquire. Research on oblique remote sensing images therefore plays a very important role in practical applications.
With the continued application of deep learning to remote sensing image processing, target detection, semantic segmentation, and instance segmentation have become new approaches to extracting building information from remote sensing images. Instance segmentation integrates target detection and semantic segmentation: it can classify each pixel and detect object boundaries one by one to extract the buildings in an image, and is an important method in building information extraction. In 2016, researchers proposed Amodal Instance Segmentation (sometimes translated as perspective instance segmentation), which further predicts the contours of the occluded parts of objects in a scene on top of conventional instance segmentation. In oblique remote sensing images occlusion is inevitable; if the occluded building contours can be predicted by an amodal instance segmentation method, the three-dimensional structure of the building is constrained and, at the same time, building texture information lost to occlusion is recovered, which is of great significance to research on buildings in remote sensing images.
Due to the complexity of amodal instance segmentation on oblique remote sensing images, two problems remain to be solved:
First, buildings vary widely in shape and scale, which makes most detection-box-based instance segmentation strategies difficult to configure in a targeted way. Meanwhile, across images with different observation angles the occlusion conditions are very complex, so the accuracy of building amodal instance segmentation is hard to guarantee.
Second, data labeling for amodal instance segmentation is very laborious and hard to obtain in quantity, and insufficient training samples further raise the difficulty of amodal instance segmentation.
Disclosure of Invention
The purpose of the invention is: aiming at the problems of insufficient training samples and low accuracy of building amodal instance segmentation in the prior art, to provide a method for individual extraction of buildings and their occluded areas based on virtual knowledge migration.
The technical scheme adopted by the invention to solve the above technical problem is as follows. The method for individual extraction of buildings and their occluded areas based on virtual knowledge migration comprises the following steps:
Step one: acquire a simulation scene V_Sim of the scene to be identified and the label V_Label of the simulation scene V_Sim; then use the simulation scene V_Sim and its label V_Label to obtain a simulation image S_Image and the corresponding ground truth S_Label; finally, use the simulation image S_Image and its ground truth S_Label to form the virtual knowledge K_sim.
Step two: according to the virtual knowledge K_sim, obtain the detection box regions (Boxes) of the whole building and its facades and the attention map r_k of each instance in the simulation image S_Image, and generate the bases (Bases) of the whole image, which include the base s_k of each instance.
Step three: merge the base s_k of each instance with the attention map r_k of that instance to obtain the mask prediction m_d; then judge whether occlusion regions exist in the detection box regions of the whole building and its facades. If no occlusion region exists, take the mask prediction m_d as the amodal instance segmentation result; if an occlusion region exists, predict the occlusion mask m_occ of the occlusion region, combine the mask prediction m_d and the occlusion mask m_occ into a composite mask, and take the composite mask as the amodal instance segmentation result;
Step four: obtain a pre-trained model pre_model according to the above steps, then perform transfer learning using pre_model together with a small number of real remote sensing image training samples to obtain the amodal instance segmentation model final_model, and finally use final_model to complete the individual extraction of buildings and their occlusion regions.
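The four-step training strategy above can be sketched at a high level as follows; the callables `train` and `fine_tune` are hypothetical stand-ins for the actual network training and transfer learning, which the patent does not specify in code form.

```python
# Hedged, high-level sketch of the training strategy only (no real network
# code): pretrain on virtual knowledge, then fine-tune on a small number of
# real samples. `train` and `fine_tune` are hypothetical callables.

def train_pipeline(virtual_knowledge, real_samples, train, fine_tune):
    # Steps one to three: pretrain the amodal segmentation network purely on
    # virtual knowledge (simulation images with automatically generated labels).
    pre_model = train(virtual_knowledge)
    # Step four: transfer learning with only a few real remote sensing samples.
    final_model = fine_tune(pre_model, real_samples)
    return final_model
```

In practice `train` would be the full BlendMask-plus-occlusion-branch training loop and `fine_tune` a short continuation of that loop on real images; the point of the sketch is only the ordering of the two phases.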
Further, the simulation scene V_Sim is obtained by mapping the terrain, ground objects, and texture information of the scene to be identified.
Further, the simulation image S_Image and the corresponding ground truth S_Label are obtained by processing the simulation scene V_Sim and its label V_Label with the same projection model P.
Further, in step two the detection box regions (Boxes) of the whole building and its facades and the attention map r_k of each instance in the image are obtained by an FCOS-based target detector.
Further, generating the bases (Bases) of the whole image in step two is implemented by a BlendMask network.
Further, merging the base s_k of each instance with the attention map r_k in step three is implemented by the Blender strategy of BlendMask.
Further, judging in step three whether occlusion regions exist in the detection box regions of the whole building and its facades is performed by an occlusion discrimination network.
Further, predicting the occlusion mask m_occ of the occlusion region in step three is performed by the occlusion discrimination network.
Further, the virtual knowledge K_sim is expressed as:
K_sim = {S_Image, S_Label}
S_Image = P(V_Sim{Terrain, Object, Texture_Sim})
S_Label = P(V_Label{Terrain, Object, Texture_Label})
where P represents the projection model, Terrain the terrain, Object the ground objects, Texture_Sim the simulated texture, and Texture_Label the label texture.
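As a minimal illustration of the formulas above, the following sketch forms K_sim by pushing V_Sim and V_Label through the same projection model P; because the two scenes share Terrain and Object and differ only in texture, S_Image and S_Label come out pixel-aligned by construction. The data structures and names here are hypothetical, not the patent's implementation.

```python
# Illustrative sketch only: the projection model P is a stand-in callable and
# scenes are plain dicts. Rendering V_Sim and V_Label through the same P
# yields a pixel-aligned (S_Image, S_Label) pair, i.e. K_sim.

def project(P, scene):
    """Apply the projection model P to a scene description."""
    return {"pixels": P(scene), "source": scene["name"]}

def build_virtual_knowledge(P, terrain, objects, texture_sim, texture_label):
    v_sim = {"name": "V_Sim", "Terrain": terrain, "Object": objects,
             "Texture": texture_sim}
    v_label = {"name": "V_Label", "Terrain": terrain, "Object": objects,
               "Texture": texture_label}
    return {"S_Image": project(P, v_sim),    # simulation image
            "S_Label": project(P, v_label)}  # its ground truth
```

The key design point mirrored here is that the labeling cost is zero: S_Label is rendered, not annotated, so arbitrary observation angles and occlusion patterns can be generated with correct amodal labels.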
Further, the mask prediction m_d is expressed as:
m_d = Σ_{k=1}^{K} s_k ∘ r_k^d,  d = 1, …, D
where K is the set number of bases, k the base index, D the number of all predicted detection boxes, d the detection box index, s a base, r an attention map, and ∘ the element-wise product.
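The blending formula can be illustrated with NumPy. This is a sketch of the element-wise blend only, and it assumes the per-box attention maps have already been cropped and resized to the base resolution (in BlendMask proper they are low-resolution per-box maps interpolated into each box region).

```python
# NumPy sketch of the blend m_d = Σ_k s_k ∘ r_k^d: for each detection box d,
# multiply every whole-image base element-wise by that box's attention map
# for the base, and sum over the K bases.
import numpy as np

def blend(bases, attns):
    """bases: (K, H, W) whole-image bases; attns: (D, K, H, W) per-box
    attention maps, assumed pre-aligned to the bases. Returns (D, H, W)."""
    K, H, W = bases.shape
    D = attns.shape[0]
    masks = np.empty((D, H, W))
    for d in range(D):
        masks[d] = (attns[d] * bases).sum(axis=0)  # Σ_k s_k ∘ r_k^d
    return masks
```

The attention maps act as per-instance soft weights that pick out, from the shared bases, the pixels belonging to detection box d.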
The invention has the beneficial effects that:
the invention provides a building and shielding area single extraction method based on virtual knowledge migration, and aims to solve the problem of perspective example segmentation that training samples are insufficient and target conditions and shielding conditions have high uncertainty in building information extraction. A virtual knowledge generation module is introduced to automatically acquire a large amount of training data which are marked by 'real' shielding conditions, have similar semantic relations and are comprehensively covered by observation angles, so that the problem of insufficient training samples is solved. By adopting a strategy of combining the example segmentation and the shielding judgment module and matching with the characteristic pyramid network, the problem of the diversity of the shapes, the scales and the shielding conditions of the buildings is solved, and the accuracy of the perspective example segmentation of the buildings is high.
Drawings
FIG. 1 is an overall flow chart of the present application;
FIG. 2 is a schematic diagram of a module for generating virtual knowledge;
fig. 3 is a schematic diagram of a perspective example split network module.
Detailed Description
It should be noted that the embodiments disclosed in the present application may be combined with each other as long as they do not conflict.
The first embodiment: referring to fig. 1, the method for individual extraction of buildings and their occluded areas based on virtual knowledge migration according to the present embodiment comprises the following steps:
Step one: acquire a simulation scene V_Sim of the scene to be identified and the label V_Label of the simulation scene V_Sim; then use the simulation scene V_Sim and its label V_Label to obtain a simulation image S_Image and the corresponding ground truth S_Label; finally, use the simulation image S_Image and its ground truth S_Label to form the virtual knowledge K_sim.
Step two: according to the virtual knowledge K_sim, obtain the detection box regions (Boxes) of the whole building and its facades and the attention map r_k of each instance in the simulation image S_Image, and generate the bases (Bases) of the whole image, which include the base s_k of each instance. An instance refers to a single target object. The bases of the whole image are generated by part of the BlendMask network structure: after the feature pyramid of the image is produced, the bases of the whole image are obtained. BlendMask contains an FCOS detector composed of three parts: a backbone network, a feature pyramid, and a detection head. After an image enters the network, it first passes through the FCOS detector (i.e., the backbone network, the feature pyramid network, and the detection head in turn). The virtual knowledge generation module is shown in fig. 2.
The image enters the backbone network for feature extraction; the output feature maps are sent to the feature pyramid network to obtain feature maps of different scales, forming a feature pyramid, after which the bases of the whole image can be output.
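A toy NumPy sketch of this bottom-up / top-down flow, with pooling and addition standing in for the backbone and lateral convolutions; it is purely illustrative and is not the patent's network:

```python
# Sketch of a feature pyramid: a bottom-up pathway of progressively coarser
# maps, then an FPN-style top-down merge P_i = C_i + upsample(P_{i+1}).
# Convolution layers are replaced by average pooling / nearest upsampling.
import numpy as np

def downsample2x(x):
    # 2x2 average pooling as a stand-in for a stride-2 backbone stage
    H, W = x.shape
    return x[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def upsample2x(x):
    return np.kron(x, np.ones((2, 2)))  # nearest-neighbour upsampling

def feature_pyramid(image, levels=3):
    """Return pyramid levels finest-first; the finest level would feed the
    base-prediction head that outputs the whole-image bases."""
    c = [image]
    for _ in range(levels - 1):
        c.append(downsample2x(c[-1]))          # bottom-up pathway
    p = [None] * levels
    p[-1] = c[-1]
    for i in range(levels - 2, -1, -1):
        p[i] = c[i] + upsample2x(p[i + 1])     # top-down merge
    return p
```

The multi-scale levels are what let the method cope with the large variation in building scale noted earlier: small buildings are detected on fine levels, large ones on coarse levels.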
Step three: merge the base s_k of each instance with the attention map r_k of that instance to obtain the mask prediction m_d; then judge whether occlusion regions exist in the detection box regions of the whole building and its facades. If no occlusion region exists, take the mask prediction m_d as the amodal instance segmentation result; if an occlusion region exists, predict the occlusion mask m_occ of the occlusion region, combine the mask prediction m_d and the occlusion mask m_occ into a composite mask, and take the composite mask as the amodal instance segmentation result;
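The decision in this step can be sketched as follows. The fusion rule shown (element-wise union via maximum) is an assumption for illustration; the text states only that m_d and m_occ are combined into a composite mask.

```python
# Hypothetical sketch of the step-three branch: keep m_d when the occlusion
# discriminator finds no occlusion in the box, otherwise fuse m_d with the
# predicted occlusion mask m_occ. The union rule below is an assumed fusion.
import numpy as np

def amodal_mask(m_d, has_occlusion, m_occ=None):
    if not has_occlusion:
        return m_d                      # visible prediction is already amodal
    assert m_occ is not None, "occlusion branch must supply m_occ"
    return np.maximum(m_d, m_occ)       # composite of visible + occluded parts
```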
Step four: obtain a pre-trained model pre_model according to the above steps, then perform transfer learning using pre_model together with a small number of real remote sensing image training samples to obtain the amodal instance segmentation model final_model, and finally use final_model to complete the individual extraction of buildings and their occlusion regions. The amodal instance segmentation network module is shown in fig. 3.
The phrase "part of real remote sensing image training samples" in this step means that the method can be trained to a usable model without a large number of real remote sensing images. Normally, training any network well requires many sample images; in reality, however, such quantities are not always available, and labeling real samples is difficult. Virtual samples are therefore used to train the network first (omitting the image labeling process), and the model is then migrated to real images. This solves the problem of insufficient real samples, which is one of the problems addressed by this patent. In other words, "part" means that the application requires very few real images to obtain the result, relative to the large number that would otherwise be needed.
A virtual knowledge generation module is introduced to automatically acquire a large amount of training data with "real" occlusion-condition labels, similar semantic relations, and comprehensive coverage of observation angles, which solves the problem of insufficient training samples. The method combines the single-stage instance segmentation method BlendMask with an occlusion discrimination module and a feature pyramid network to handle the diversity of building shapes, scales, and occlusion conditions; BlendMask performs well in inference speed and in detecting small targets and objects separated by occlusion. Finally, transfer learning is performed by combining the pre-trained model obtained from virtual sample training with real samples, yielding a training model oriented to real samples.
In addition, to realize separate processing of building roofs and facades: a building facade is strongly affected by the observation angle and is not suitable to be segmented as an independent instance, so the roof and the whole building are treated as the two instance classes, and the roof and facade pixels of each building can then be further obtained from the subordination relation between them.
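Under the subordination relation described here, the facade pixels of one building are recoverable as a set difference of boolean masks. A minimal sketch, assuming the roof mask is the one subordinate to (contained in) the whole-building mask:

```python
# Sketch of the subordination step: with "roof" and "whole building" as the
# two predicted instance classes, facade pixels follow as whole minus roof.
import numpy as np

def facade_pixels(whole_mask, roof_mask):
    """whole_mask, roof_mask: (H, W) boolean arrays for one building instance,
    where roof_mask is the roof belonging to that whole-building mask."""
    return whole_mask & ~roof_mask
```

This is why the facade never needs to be an instance class of its own: it is fully determined once the roof and whole-building instances are matched.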
The second embodiment is a further description of the first embodiment; the difference is that the simulation scene V_Sim is obtained by mapping the terrain, ground objects, and texture information of the scene to be identified.
The third embodiment is a further description of the first embodiment; the difference is that the simulation image S_Image and the corresponding Ground Truth are obtained by processing the simulation scene V_Sim and its label V_Label with the same projection model P.
The fourth embodiment is a further description of the first embodiment; the difference is that in step two the detection box regions (Boxes) of the whole building and its facades and the attention map r_k of each instance in the image are obtained by an FCOS-based target detector.
The fifth embodiment is a further description of the first embodiment; the difference is that generating the bases (Bases) of the whole image in step two is implemented by a BlendMask network.
The sixth embodiment is a further description of the first embodiment; the difference is that merging the base s_k of each instance with the attention map r_k in step three is implemented by the Blender strategy of BlendMask.
The seventh embodiment is a further description of the first embodiment; the difference is that judging whether occlusion regions exist in the detection box regions of the whole building and its facades in step three is performed by an occlusion discrimination network.
The eighth embodiment is a further description of the first embodiment; the difference is that predicting the occlusion mask m_occ of the occlusion region in step three is performed by the occlusion discrimination network.
The ninth embodiment is a further description of the first embodiment; the difference is that the virtual knowledge K_sim is expressed as:
K_sim = {S_Image, S_Label}
S_Image = P(V_Sim{Terrain, Object, Texture_Sim})
S_Label = P(V_Label{Terrain, Object, Texture_Label})
where P represents the projection model, Terrain the terrain, Object the ground objects, Texture_Sim the simulated texture, and Texture_Label the label texture.
The tenth embodiment is a further description of the first embodiment; the difference is that the mask prediction m_d is expressed as:
m_d = Σ_{k=1}^{K} s_k ∘ r_k^d,  d = 1, …, D
where K is the set number of bases, k the base index, D the number of all predicted detection boxes, d the detection box index, s a base, r an attention map, and ∘ the element-wise product.
It should be noted that the detailed description serves only to explain the technical solution of the invention and does not limit the scope of protection of the claims. All modifications and variations falling within the claims and the description are intended to be included within the scope of the invention.

Claims (10)

1. A method for individual (monomer) extraction of a building and its occluded area based on virtual knowledge migration, characterized by comprising the following steps:
Step one: acquire a simulation scene V_Sim of the scene to be identified and the label V_Label of the simulation scene V_Sim; then use the simulation scene V_Sim and its label V_Label to obtain a simulation image S_Image and the corresponding ground truth S_Label; finally, use the simulation image S_Image and its ground truth S_Label to form the virtual knowledge K_sim.
Step two: according to the virtual knowledge K_sim, obtain the detection box regions (Boxes) of the whole building and its facades and the attention map r_k of each instance in the simulation image S_Image, and generate the bases (Bases) of the whole image, which include the base s_k of each instance.
Step three: merge the base s_k of each instance with the attention map r_k of that instance to obtain the mask prediction m_d; then judge whether occlusion regions exist in the detection box regions of the whole building and its facades. If no occlusion region exists, take the mask prediction m_d as the amodal instance segmentation result; if an occlusion region exists, predict the occlusion mask m_occ of the occlusion region, combine the mask prediction m_d and the occlusion mask m_occ into a composite mask, and take the composite mask as the amodal instance segmentation result;
Step four: obtain a pre-trained model pre_model according to the above steps, then perform transfer learning using pre_model together with a small number of real remote sensing image training samples to obtain the amodal instance segmentation model final_model, and finally use final_model to complete the individual extraction of buildings and their occlusion regions.
2. The method according to claim 1, wherein the simulation scene V_Sim is obtained by mapping the terrain, ground objects, and texture information of the scene to be identified.
3. The method according to claim 1, wherein the simulation image S_Image and the corresponding ground truth S_Label are obtained by processing the simulation scene V_Sim and its label V_Label with the same projection model P.
4. The method according to claim 1, wherein in step two the detection box regions (Boxes) of the whole building and its facades and the attention map r_k of each instance in the image are obtained by an FCOS-based target detector.
5. The method according to claim 1, wherein generating the bases (Bases) of the whole image in step two is implemented by a BlendMask network.
6. The method according to claim 1, wherein merging the base s_k of each instance with the attention map r_k in step three is implemented by the Blender strategy of BlendMask.
7. The method according to claim 1, wherein judging whether occlusion regions exist in the detection box regions of the whole building and its facades in step three is performed by an occlusion discrimination network.
8. The method according to claim 1, wherein predicting the occlusion mask m_occ of the occlusion region in step three is performed by the occlusion discrimination network.
9. The method according to claim 1, wherein the virtual knowledge K_sim is expressed as:
K_sim = {S_Image, S_Label}
S_Image = P(V_Sim{Terrain, Object, Texture_Sim})
S_Label = P(V_Label{Terrain, Object, Texture_Label})
where P represents the projection model, Terrain the terrain, Object the ground objects, Texture_Sim the simulated texture, and Texture_Label the label texture.
10. The method according to claim 1, wherein the mask prediction m_d is expressed as:
m_d = Σ_{k=1}^{K} s_k ∘ r_k^d,  d = 1, …, D
where K is the set number of bases, k the base index, D the number of all predicted detection boxes, d the detection box index, s a base, r an attention map, and ∘ the element-wise product.
CN202110707259.XA 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof Active CN113487738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110707259.XA CN113487738B (en) 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110707259.XA CN113487738B (en) 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Publications (2)

Publication Number Publication Date
CN113487738A true CN113487738A (en) 2021-10-08
CN113487738B CN113487738B (en) 2022-07-05

Family

ID=77936209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110707259.XA Active CN113487738B (en) 2021-06-24 2021-06-24 Building based on virtual knowledge migration and shielding area monomer extraction method thereof

Country Status (1)

Country Link
CN (1) CN113487738B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024139700A1 (en) * 2022-12-28 2024-07-04 腾讯科技(深圳)有限公司 Building identification method and apparatus, and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310274A1 (en) * 2014-04-25 2015-10-29 Xerox Corporation Method and system for automatically locating static occlusions
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
CN109409286A (en) * 2018-10-25 2019-03-01 哈尔滨工程大学 Ship target detection method based on the enhancing training of pseudo- sample
US20190205616A1 (en) * 2017-12-29 2019-07-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for detecting face occlusion
CN110009657A (en) * 2019-04-01 2019-07-12 南京信息工程大学 A kind of video object dividing method based on the modulation of pyramid network
CN110472089A (en) * 2019-08-16 2019-11-19 重庆邮电大学 A kind of infrared and visible images search method generating network based on confrontation
US20200278681A1 (en) * 2019-02-28 2020-09-03 Zoox, Inc. Determining occupancy of occluded regions
US20200387698A1 (en) * 2018-07-10 2020-12-10 Tencent Technology (Shenzhen) Company Limited Hand key point recognition model training method, hand key point recognition method and device
CN112163634A (en) * 2020-10-14 2021-01-01 平安科技(深圳)有限公司 Example segmentation model sample screening method and device, computer equipment and medium
US20210027532A1 (en) * 2019-07-25 2021-01-28 General Electric Company Primitive-based 3d building modeling, sensor simulation, and estimation


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LONGZHE QUAN et al.: "An Instance Segmentation-Based Method to Obtain the Leaf Age and Plant Centre of Weeds in Complex Field Environments", Sensors *
NAN SU et al.: "Shadow Detection and Removal for Occluded Object Information Recovery in Urban High-Resolution Panchromatic Satellite Images", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing *
胡剑秋 et al.: "Pedestrian Segmentation Based on Mask R-CNN", Command Control & Simulation (《指挥控制与仿真》) *


Also Published As

Publication number Publication date
CN113487738B (en) 2022-07-05

Similar Documents

Publication Publication Date Title
CN111209810B (en) Boundary frame segmentation supervision deep neural network architecture for accurately detecting pedestrians in real time through visible light and infrared images
CN110516620B (en) Target tracking method and device, storage medium and electronic equipment
CN109697435B (en) People flow monitoring method and device, storage medium and equipment
CN110188705A (en) A kind of remote road traffic sign detection recognition methods suitable for onboard system
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
WO2012139228A1 (en) Video-based detection of multiple object types under varying poses
CN111985367A (en) Pedestrian re-recognition feature extraction method based on multi-scale feature fusion
CN106203277A (en) Fixed lens real-time monitor video feature extracting method based on SIFT feature cluster
US20220315243A1 (en) Method for identification and recognition of aircraft take-off and landing runway based on pspnet network
CN114758362A (en) Clothing changing pedestrian re-identification method based on semantic perception attention and visual masking
CN103426179A (en) Target tracking method and system based on mean shift multi-feature fusion
CN110751077B (en) Optical remote sensing picture ship detection method based on component matching and distance constraint
CN112507845B (en) Pedestrian multi-target tracking method based on CenterNet and depth correlation matrix
CN111723747A (en) Lightweight high-efficiency target detection method applied to embedded platform
CN114966696A (en) Transformer-based cross-modal fusion target detection method
CN113487738B (en) Building based on virtual knowledge migration and shielding area monomer extraction method thereof
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
Abujayyab et al. Integrating object-based and pixel-based segmentation for building footprint extraction from satellite images
CN117690009A (en) Small sample data amplification method suitable for underwater flexible movable target
Su et al. Which CAM is better for extracting geographic objects? A perspective from principles and experiments
Wang et al. Vehicle key information detection algorithm based on improved SSD
CN103426178A (en) Target tracking method and system based on mean shift in complex scene
Lei et al. Ship detection based on deep learning under complex lighting
Wang et al. The building area recognition in image based on faster-RCNN
Wang et al. Measuring driving behaviors from live video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant