CN115424155A - Illegal construction detection method, illegal construction detection device and computer storage medium - Google Patents

Illegal construction detection method, illegal construction detection device and computer storage medium

Info

Publication number
CN115424155A
CN115424155A (application CN202211377018.4A)
Authority
CN
China
Prior art keywords
image
aerial vehicle
unmanned aerial
illegal
vehicle image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211377018.4A
Other languages
Chinese (zh)
Other versions
CN115424155B (en)
Inventor
任宇鹏
金恒
周宏宾
李乾坤
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211377018.4A priority Critical patent/CN115424155B/en
Publication of CN115424155A publication Critical patent/CN115424155A/en
Application granted granted Critical
Publication of CN115424155B publication Critical patent/CN115424155B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an illegal construction detection method, an illegal construction detection device and a computer storage medium. The illegal construction detection method includes: acquiring an unmanned aerial vehicle image acquired over a target area; acquiring a corresponding orthographic map slice according to the unmanned aerial vehicle image; acquiring a reprojected image in which the orthographic map slice is reprojected to the coordinate system of the unmanned aerial vehicle image; obtaining a change detection area result of the target area between a first moment and a second moment based on the image difference information of the reprojected image and the unmanned aerial vehicle image; and forming illegal building detection information according to the change detection area result, wherein the illegal building detection information includes the position of the illegal building behavior, the time of the illegal building behavior and/or the type of the illegal building behavior. With the illegal construction detection method, whole-process supervision of illegal construction behaviors in the target area can be realized through the change detection result between the orthographic map and the unmanned aerial vehicle image.

Description

Illegal construction detection method, illegal construction detection device and computer storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an illegal construction detection method, an illegal construction detection apparatus, and a computer storage medium.
Background
Illegal building supervision has always been one of the important tasks of urban management. Traditional manual patrol of illegal buildings is time-consuming and labor-intensive, so supervision is neither timely nor effective enough; moreover, due to the limitation of the patrol viewing angle, illegal rooftop extensions and additions cannot be discovered in time, and the subsequent demolition and rectification of illegal buildings therefore consume a large amount of time and energy. How to control and rectify illegal construction over the long term, in a timely manner and at low cost, to turn post-violation treatment into in-process treatment or even advance prevention, to effectively prevent illegal extension and addition behaviors, and to improve the efficiency of supervision departments is a problem that urgently needs to be solved.
Disclosure of Invention
The application provides a method and a device for detecting illegal construction and a computer storage medium.
One technical solution adopted by the present application is to provide an illegal construction detection method, including:
acquiring an unmanned aerial vehicle image acquired based on a target area;
acquiring a corresponding orthographic map slice according to the unmanned aerial vehicle image;
acquiring a reprojected image in which the orthographic map slice is reprojected to the coordinate system of the unmanned aerial vehicle image;
acquiring a change detection area result of the target area based on the image difference information of the re-projected image and the unmanned aerial vehicle image;
and forming illegal building detection information according to the change detection area result, wherein the illegal building detection information comprises illegal building behavior positions, illegal building behavior time and/or illegal building behavior types.
Wherein the obtaining of the change detection area result of the target area based on the image difference information of the reprojected image and the unmanned aerial vehicle image includes:
taking the re-projected image as a first input of a twin change detection network and the unmanned aerial vehicle image as a second input of the twin change detection network;
and comparing the remote sensing image characteristics of the reprojected image with the remote sensing image characteristics of the unmanned aerial vehicle image by the twin change detection network, and outputting a change detection area result according to the comparison result.
The twin change detection network comprises a result output module, a characteristic comparison network, a first twin network and a second twin network which share weight;
the first twin network is used for extracting the remote sensing image features of the reprojected image, the second twin network is used for extracting the remote sensing image features of the unmanned aerial vehicle image, the feature comparison network is used for comparing the remote sensing image features of the reprojected image with the remote sensing image features of the unmanned aerial vehicle image, and the result output module is fused with the comparison output of the feature comparison network and forms a change detection area result.
Wherein the result output module comprises a multilayer perceptron and an upsampling layer; the multilayer perceptron is used for unifying the number of output channels of the feature comparison network; the upsampling layer is used for upsampling the output of the feature comparison network to a preset size of the reprojected image and/or the unmanned aerial vehicle image and inputting the upsampling result into the multilayer perceptron; the multilayer perceptron is further used for outputting a classification confidence according to the upsampling result, wherein the classification includes detection area changed and detection area unchanged.
Wherein the change detection area result comprises a predicted change detection box;
the forming illegal detection information according to the change detection area result comprises:
detecting a frame based on a number of predicted changes in the change detection area result;
carrying out instance segmentation on the building violating the unmanned aerial vehicle image to obtain a plurality of building violating detection frames of the unmanned aerial vehicle image;
and calculating the overlapping rate of the plurality of predicted change detection frames and the plurality of default detection frames, removing the predicted change detection frames or default detection frames with the overlapping rate larger than a preset threshold value, and forming default detection information by the remaining predicted change detection frames and the default detection frames.
Wherein the obtaining of the reprojected image of the orthographic map slice reprojected to the coordinate system of the unmanned aerial vehicle image comprises:
acquiring a plurality of plane blocks of the orthographic map slice, wherein the plane blocks are orthographic map slice areas which are positioned on the same height plane in the orthographic map slice;
acquiring a homography matrix of each plane block and the unmanned aerial vehicle image;
and respectively re-projecting the plane blocks of the orthographic map slice onto a coordinate system of the unmanned aerial vehicle image according to the homography matrix of each plane block to form the re-projected image.
Wherein the obtaining of the plurality of planar segments of the orthographic map slice comprises:
acquiring a corresponding digital surface model slice according to the orthographic map slice;
clustering all model points in the digital surface model slice according to height information to obtain a plurality of model point groups;
and forming a plurality of masks from the plurality of model point groups, and processing the orthographic map slice with the masks to obtain the plurality of plane blocks of the orthographic map slice.
Wherein the obtaining of the plurality of planar patches of the orthographic map slice comprises:
carrying out example segmentation on the ortho map slice to obtain a roof example area and a ground example area in the ortho map slice;
and dividing the orthographic map slice according to the roof example area and the ground example area to obtain a plurality of plane blocks of the orthographic map slice.
Wherein the acquiring of the corresponding orthographic map slice according to the unmanned aerial vehicle image includes:
reading positioning information of the unmanned aerial vehicle image;
and cutting out, from the orthographic map according to the positioning information, an orthographic map slice with the same range as the unmanned aerial vehicle image.
Wherein the cutting out, from the orthographic map according to the positioning information, of the orthographic map slice with the same range as the unmanned aerial vehicle image includes:
acquiring a first image size of the unmanned aerial vehicle image;
determining the positioning range of the orthographic map slice in the orthographic map according to the positioning information;
determining a second image size of the orthographic map slice according to the first image size, wherein the second image size is larger than the first image size;
and cutting the ortho-map slice from the ortho-map according to the second image size and the positioning range.
Another technical solution adopted by the present application is to provide a device for detecting illegal construction, which includes a memory and a processor coupled to the memory;
wherein the memory is adapted to store program data and the processor is adapted to execute the program data to implement the method of violation detection as described above.
Another technical solution adopted by the present application is to provide a computer storage medium, where the computer storage medium is used to store program data, and the program data is used to implement the above-mentioned violation detection method when executed by a computer.
The beneficial effect of this application is: the illegal building detection device acquires an unmanned aerial vehicle image acquired over a target area; acquires a corresponding orthographic map slice according to the unmanned aerial vehicle image; acquires a reprojected image in which the orthographic map slice is reprojected to the coordinate system of the unmanned aerial vehicle image; acquires a change detection area result of the target area based on the image difference information of the reprojected image and the unmanned aerial vehicle image; and forms illegal building detection information according to the change detection area result, wherein the illegal building detection information includes an illegal building behavior position, illegal building behavior time and/or illegal building behavior type. With this illegal construction detection method, whole-process supervision of illegal construction behaviors in the target area can be realized through the change detection result between the orthographic map and the unmanned aerial vehicle image.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a violation detection method provided by the present application;
FIG. 2 is a schematic diagram of a general flow of the violation detection method provided in the present application;
fig. 3 is a schematic diagram of an embodiment of a drone image provided by the present application;
FIG. 4 is a schematic view of an embodiment of an orthographic map slice provided herein;
FIG. 5 is a schematic diagram of one embodiment of an orthographic map and a digital surface model provided herein;
FIG. 6 is a schematic diagram of an orthographic map ortho correction and projection error elimination provided herein;
FIG. 7 is a flowchart illustrating specific sub-steps of step S13 of the violation detection method shown in FIG. 1;
FIG. 8 is a schematic diagram illustrating an embodiment of an orthographic map reprojection result provided herein;
FIG. 9 is a schematic diagram illustrating another embodiment of an orthographic map reprojection result provided herein;
FIG. 10 is a schematic structural diagram of an embodiment of a twin change detection model provided herein;
FIG. 11 is a schematic diagram of one embodiment of a change detection result provided herein;
FIG. 12 is a schematic structural diagram of an embodiment of a violation detection device provided in the present application;
FIG. 13 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The application provides an illegal construction supervision scheme based on unmanned aerial vehicle image inspection that comprehensively applies deep learning registration, instance segmentation and historical image change analysis. It can automatically extract and analyze illegal buildings in a specific area, investigate illegal behaviors over a wide area without blind spots and at low cost, and, with a high illegal-building discovery rate and fast processing efficiency, accomplish general investigation of inventory illegal buildings, in-process detection of illegal behaviors and advance prevention of additional construction.
The deep learning instance segmentation mainly detects existing illegal buildings, construction personnel and related construction equipment; the deep learning image registration mainly registers the unmanned aerial vehicle images of multiple periods with the orthographic map; the deep learning change analysis mainly detects changed areas between the unmanned aerial vehicle images of multiple periods and the orthographic map.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flowchart of an embodiment of the illegal construction detection method provided in the present application, and fig. 2 is a schematic diagram of the general flow of the illegal construction detection method provided in the present application.
The technical process of the method for detecting and monitoring the deep learning illegal buildings based on the unmanned aerial vehicle images specifically refers to the general flow diagram in fig. 2. The illegal construction detection method is mainly divided into three modules, namely a module 1 base map positioning and cutting transformation module, a module 2 unmanned aerial vehicle image/video stream instance segmentation module and a module 3 change detection and post-processing module. The following describes the work flows of the module 1, the module 2, and the module 3 respectively with reference to a flow diagram of an embodiment of the violation detection method described in fig. 1.
The illegal construction detection method is applied to an illegal construction detection device, wherein the illegal construction detection device can be a server or a system formed by the server and the illegal construction detection device in a mutual matching mode. Accordingly, each part, for example, each unit, sub-unit, module, sub-module, included in the illegal building detection device may be all disposed in the server, or may be disposed in the server and the illegal building detection device, respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein. In some possible implementations, the violation detection method of the embodiments of the present application may be implemented by a processor calling a computer readable instruction stored in a memory.
Specifically, as shown in fig. 1, the method for detecting violation according to the embodiment of the present application specifically includes the following steps:
step S11: and acquiring an unmanned aerial vehicle image acquired based on the target area.
In the embodiment of the application, during unmanned aerial vehicle inspection, the illegal construction detection device reads the unmanned aerial vehicle images or unmanned aerial vehicle video streams acquired in different periods, and acquires the unmanned aerial vehicle images from them. The multiple acquisitions of unmanned aerial vehicle images can be at adjacent moments, such as adjacent dates, adjacent months or adjacent years, or at non-adjacent moments, such as multiple days, months or years between the first moment and the second moment.
Step S12: and acquiring a corresponding orthographic map slice according to the unmanned aerial vehicle image.
In the embodiment of the application, the illegal construction detection device acquires the corresponding orthographic map slice according to the unmanned aerial vehicle image.
Specifically, fig. 3 is a schematic diagram of an embodiment of a drone image provided by the present application, and fig. 4 is a schematic diagram of an embodiment of an orthographic map slice provided by the present application. Comparing fig. 3 and fig. 4, it can be seen that the content of the orthographic map slice is substantially the same as the content of the drone image, and the GPS information of both is identical. For example, as building a in fig. 3 and building b in fig. 4. Building a and building b are substantially the same building, and since the drone image is a non-orthographic image, the building is photographed at a different angle than the building in the orthographic map slice. By the method for detecting the building violation, the building b in fig. 4 can be re-projected to the position of the building a in fig. 3 in a re-projection mode.
The violation detection device reads the GPS information in the EXIF (Exchangeable Image File Format) data of the unmanned aerial vehicle image, such as the longitude and latitude of the target area, and cuts out an orthographic map slice corresponding to the range of the unmanned aerial vehicle image from the orthographic map according to the GPS information.
In other embodiments, the illegal construction detection device can expand the map area determined by the GPS information of the unmanned aerial vehicle image by a preset margin, so that, taking camera distortion, calculation errors and the like into account, the orthographic map slice is guaranteed to provide all the pixel information matching the unmanned aerial vehicle image. Positioning is performed with the GPS information of the unmanned aerial vehicle image, and the corresponding base map area is cut according to the width and height of the unmanned aerial vehicle image, the cutting width and height being slightly larger than those of the image, so as to compensate for the positioning error of the unmanned aerial vehicle image and improve the success rate of the subsequent registration between the orthographic map slice and the unmanned aerial vehicle image.
Specifically, the illegal building detection device acquires a first image size of the unmanned aerial vehicle image, and determines a second image size of the orthographic map slice according to the first image size, wherein the second image size is slightly larger than the first image size. The illegal building detection device determines the positioning range of the orthographic map slice in the orthographic map according to the positioning information, and then cuts the orthographic map slice out of the orthographic map according to the second image size and the positioning range.
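As an illustration of this positioning-and-cutting step, the following Python sketch crops an expanded slice around the GPS position read from the drone image EXIF. It is only a minimal sketch under simplifying assumptions: the function name, the linear degrees-per-pixel geotransform and the margin ratio are illustrative and not part of the patent, and the drone image and base map are assumed to have comparable ground resolution.

```python
import numpy as np

def crop_ortho_slice(ortho_map, origin_lon, origin_lat, deg_per_px_x, deg_per_px_y,
                     drone_lon, drone_lat, drone_w, drone_h, margin=0.2):
    # Locate the drone GPS position inside the orthographic base map (positioning range).
    cx = int((drone_lon - origin_lon) / deg_per_px_x)
    cy = int((origin_lat - drone_lat) / deg_per_px_y)   # latitude decreases downward

    # Second image size: slightly larger than the first (drone) image size.
    half_w = int(drone_w * (1 + margin) / 2)
    half_h = int(drone_h * (1 + margin) / 2)

    y0, y1 = max(cy - half_h, 0), min(cy + half_h, ortho_map.shape[0])
    x0, x1 = max(cx - half_w, 0), min(cx + half_w, ortho_map.shape[1])
    return ortho_map[y0:y1, x0:x1]
```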
In the subsequent reprojection process, a digital surface model slice (DSM slice) may also be required. The digital surface model slice is obtained in the same manner as the orthographic map slice, which is not repeated here. The model points in the digital surface model slice correspond one to one to the pixel points in the orthographic map slice. Specifically, fig. 5 is a schematic diagram of an embodiment of an orthographic map and a digital surface model provided in the present application, where area A is the orthographic map, area B is the digital surface model, and area C and area D are local detail displays of the orthographic map at different scales.
Specifically, a Digital Surface Model (DSM) is a ground elevation model that includes the heights of surface objects such as buildings, bridges and trees. Compared with a Digital Elevation Model (DEM), which only contains the elevation information of the terrain and no other surface information, the DSM additionally contains the elevation of surface objects above the ground. It therefore receives great attention in fields that have requirements on building height.
Height information lacking in the orthographic map is made up through the height information provided by the digital surface model, and the registration effect and the re-projection effect of the orthographic map slice and the unmanned aerial vehicle image can be improved by means of the height information supplement of the digital surface model.
As shown in fig. 2, the violation detection device prestores an orthographic map and a DSM of a large area, or acquires the orthographic map and the DSM of the large area from the cloud server. The process of making the orthographic map and the DSM may be: firstly, an image or a video stream of an unmanned aerial vehicle in a research area is obtained to construct an initial base Map, and a True ortho image (TDOM) and a Digital Surface Model (DSM) in the research area are obtained, wherein the TDOM is used as a comparison standard or a base for the unmanned aerial vehicle to fly at all times, and the DSM provides elevation information for a subsequent base Map. Specifically, a large-range base image and DSM of the unmanned aerial vehicle in a research area are obtained through technologies such as image splicing, multi-angle three-dimensional reconstruction and the like, and Smart3D, pix4D, photoscan and other commercial unmanned aerial vehicle oblique photography modeling software or other open-source frames can be adopted. After acquiring the large-range image base map and DSM of the unmanned aerial vehicle in the research area, storing the images in a database for later use.
The cut ortho-map slice needs to be registered and corrected with the unmanned aerial vehicle image. This is mainly due to the fact that in the process of making the orthographic maps, projection errors are eliminated according to the adjacent maps and the DSM, the overall view angle is a vertical view angle, and buildings and terrain have no geometric deformation. Specifically, please refer to fig. 6, fig. 6 is a schematic diagram of the orthographic map correction and the projection error elimination provided in the present application. The left diagram E in fig. 6 is a map image before the ortho-correction and the projection error removal, and the right diagram F in fig. 6 is a map image after the ortho-correction and the projection error removal, that is, an ortho-map.
Step S13: and acquiring a re-projection image of the coordinate system of the orthographic map slice re-projected to the unmanned aerial vehicle image.
In an embodiment of the present application, the violation detection device calculates a first homography matrix between the ortho-map slice and the drone image. Then, the illegal building detection device re-projects the orthographic map slices onto a coordinate system of the unmanned aerial vehicle image according to the homography matrix to obtain a re-projected image.
Since the images or video key frames acquired by the unmanned aerial vehicle during inspection are not strictly orthographic (as shown in fig. 3), corresponding registration and correction is performed before they are input to change detection. However, because the shooting distance of the unmanned aerial vehicle is short, the photographed objects (such as buildings and the ground) are not in the same height plane, so the mapping relationship of points on different planes between different images cannot be described by a single homography matrix.
Therefore, the application proposes a method of reprojecting the orthographic map slice region by region, which solves the problem that a single homography matrix cannot accurately describe the mapping relationship. Referring specifically to fig. 7, fig. 7 is a flowchart illustrating the specific sub-steps of step S13 of the illegal construction detection method shown in fig. 1.
Specifically, as shown in fig. 7, the method for detecting violation according to the embodiment of the present application specifically includes the following steps:
step S131: and acquiring a plurality of plane blocks of the orthographic map slice, wherein the plane blocks are orthographic map slice areas of the same height plane in the orthographic map slice.
In the embodiment of the application, the illegal building detection device may obtain the plurality of plane blocks of the orthographic map slice by a supervised method of roof instance segmentation or by an unsupervised method of height clustering on the DSM, where the plane blocks are orthographic map slice regions of the orthographic map slice lying in the same height plane, and the same height plane may be exactly the same height plane or an approximate plane of similar heights.
Two ways of obtaining the plane blocks are described below:
firstly, acquiring a corresponding digital surface model slice by an illegal building detection device according to an orthographic map slice; clustering all model points in the digital surface model slice according to the height information to obtain a plurality of model point groups; and grouping a plurality of model points to form a plurality of masks, and processing the orthographic map slices by using the masks to obtain a plurality of plane blocks of the orthographic map slices.
Specifically, the violation detection device uses a preset clustering algorithm to cluster all the model points in the DSM slice according to their height information, so that the model points in the DSM slice are clustered into a plurality of model point groups. The model points in each model point group have the same or similar heights, i.e., their height distance is within a preset range, and thus each model point group can be approximately treated as a plane.
The violation detection device converts the formed model points in each model point group into a mask in sequence from low to high according to the average heights of the model points in the group, namely, each model point group forms a mask.
Further, the illegal building detection device applies each mask to the original orthographic map slice in turn, and the orthographic map slice area obtained under each mask approximates a single plane. Specifically, because the model points of the DSM slice correspond one to one to the pixel points of the orthographic map slice, each model point group can be approximated as a plane in the DSM slice, and through this pixel correspondence the orthographic map slice area under the mask can likewise be approximated as a plane.
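The following Python sketch illustrates this first, clustering-based way of forming plane blocks. The patent only speaks of a preset clustering algorithm; the use of k-means, the cluster count n_planes and the function names are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def plane_blocks_from_dsm(ortho_slice, dsm_slice, n_planes=4):
    # Cluster all DSM model points by height; one cluster ~ one height plane.
    h, w = dsm_slice.shape
    labels = KMeans(n_clusters=n_planes, n_init=10).fit_predict(dsm_slice.reshape(-1, 1))
    labels = labels.reshape(h, w)

    # Turn the groups into masks, ordered from low to high average height,
    # and apply each mask to the ortho-map slice to get one plane block.
    order = np.argsort([dsm_slice[labels == k].mean() for k in range(n_planes)])
    blocks = []
    for k in order:
        mask = labels == k
        blocks.append((mask, np.where(mask[..., None], ortho_slice, 0)))
    return blocks
```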
Secondly, carrying out instance segmentation on the orthographic map slices by using an illegal building detection device to obtain a roof instance region and a ground instance region in the orthographic map slices; and dividing the ortho map slice according to the roof example area and the ground example area to obtain a plurality of plane blocks of the ortho map slice.
Continuing with module 2 in fig. 2, this module is mainly concerned with instance segmentation of illegal buildings and roofs, together with the construction of an illegal building sample library and semi-automated labeling.
The module mainly performs instance segmentation of illegal buildings and roofs on the images or video key frames acquired by the unmanned aerial vehicle in previous inspections, and respectively acquires the detection frames and mask information of the illegal buildings and the mask information of the roof objects. The detection and segmentation results of the illegal buildings serve as the general investigation result of inventory illegal buildings in the research area and are fused with the change detection result in the post-processing stage of module 3. The instance segmentation result of the roofs in the research area is mainly used as an input of module 1: the orthographic map slice is homographically transformed region by region, the transformed result is obtained, and the hole parts of the image are filled with the unmanned aerial vehicle image to obtain the final result, which serves as one of the inputs of the change detection in the next module.
Step S132: and acquiring a homography matrix of each plane block and the unmanned aerial vehicle image.
In the embodiment of the application, the illegal construction detection device extracts feature points from the masked orthographic map slice and from the original unmanned aerial vehicle image respectively, performs feature point matching, and solves the projection relation from the matched feature points, where the projection relation may be the homography matrix between the masked orthographic map slice and the unmanned aerial vehicle image.
Specifically, the illegal building detection device extracts a plurality of first feature points of the unmanned aerial vehicle image and a plurality of second feature points of the orthographic map slice after mask processing; and matching the characteristic points of the plurality of first characteristic points and the plurality of second characteristic points, and calculating a homography matrix of the orthomap slice after the mask processing and the unmanned aerial vehicle image according to a matching result.
Further, because the orthographic map slice area obtained under each mask can be approximated as a single plane, the illegal construction detection device can also calculate a homography matrix between the orthographic map slice area of each mask and the unmanned aerial vehicle image, and then use these multiple homography matrices to perform the reprojection from the orthographic map slice to the non-orthographic unmanned aerial vehicle image block by block.
In the embodiment of the present application, the corresponding orthographic map slice area of each mask in the orthographic map slice can be similar to the same plane, and the orthographic map slice area is therefore used as the minimum unit for the re-projection.
In the embodiment of the application, the violation detection device calculates the mapping relation between the ortho-map slice region and the unmanned aerial vehicle image, namely the homography matrix, by using a plurality of successfully matched feature points according to the feature point matching result between each ortho-map slice region and the unmanned aerial vehicle image.
A homography matrix is the matrix used in a perspective transformation, which describes the mapping relationship between two planes. It is called a homography because the relationship between the two planes is deterministic and the transformation can be represented by a unique matrix.
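A minimal sketch of estimating one such homography from matched feature points is given below. The patent does not name a particular feature extractor or solver; ORB features, brute-force matching and RANSAC via OpenCV are assumptions chosen for illustration, as are the function and parameter names.

```python
import cv2
import numpy as np

def homography_for_block(masked_ortho_block, drone_image, min_matches=8):
    # Extract feature points from the masked ortho-map plane block and the drone image.
    g1 = cv2.cvtColor(masked_ortho_block, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(drone_image, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    if des1 is None or des2 is None:
        return None

    # Match feature points between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Solve the projection relation; RANSAC suppresses wrongly matched points.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```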
Step S133: and respectively re-projecting the plane blocks of the orthographic map slice onto a coordinate system of the unmanned aerial vehicle image according to the homography matrix of each plane block to form a re-projected image.
In the embodiment of the application, the violation detection device reprojects the masked orthographic map slice or the original orthographic map slice onto the coordinate system of the unmanned aerial vehicle image according to the homography matrices calculated in step S132. Since step S132 calculates the homography matrices between the multiple orthographic map slice areas and the unmanned aerial vehicle image, the violation detection apparatus can reproject the masked orthographic map slices into the coordinate system of the unmanned aerial vehicle image block by block using the multiple homography matrices. After the illegal building detection device finishes processing all the orthographic map slice areas under their masks, the multiple reprojections are superimposed to obtain the result of matching the original orthographic map slice to the unmanned aerial vehicle image, i.e., the reprojected, matched image. Referring to fig. 8, fig. 8 is a schematic diagram of an embodiment of an orthographic map reprojection result provided in the present application.
Further, the image shown in fig. 3 is a non-orthographic image obtained by unmanned aerial vehicle inspection; because the flying height of the unmanned aerial vehicle is low, the perspective is pronounced and the side elevations of the buildings are visible, and because the roofs of the buildings differ in height and do not lie in the same plane, matching to the orthographic map cannot be achieved with a single homography matrix. Fig. 8 shows the result of matching an orthographic map slice to the unmanned aerial vehicle image of fig. 3 according to the illegal construction detection method described in the present application. For example, building c in fig. 8 is the reprojection result of the orthographic map slice matched onto the drone image; as can be seen from fig. 8, the illegal construction detection device displays the shape of building c by fitting the orthographic building of the orthographic map slice onto the same non-orthographic building in the drone image.
In addition, since the orthographic map does not include information of the building side elevations, there are missing pixels in the side-elevation areas after reprojection, i.e., the black connected areas shown in fig. 8. After the reprojection, the roofs of the buildings on the map are aligned with the roofs in the unmanned aerial vehicle image, which meets the requirement of subsequent illegal building detection based on image comparison. For the problem of missing pixels at the side-elevation positions, the pixels at the corresponding positions of the unmanned aerial vehicle image can be used for filling, and the noise of these regions can be effectively filtered by the deep-learning-based image comparison algorithm. Referring to fig. 9, fig. 9 is a schematic diagram of another embodiment of an orthographic map reprojection result provided in the present application. Comparing area d in fig. 9 with the black connected areas shown in fig. 8, the pixels at the side-elevation positions in the unmanned aerial vehicle image are filled into the black connected areas of fig. 8 by this pixel-filling method, giving the display effect of area d.
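The block-by-block warping and the hole filling described above can be sketched as follows. This is illustrative only: blocks is assumed to be a list of (mask, plane block, homography) triples produced by the two previous sketches, and the compositing order and names are not specified by the patent.

```python
import cv2
import numpy as np

def reproject_ortho_slice(blocks, drone_image):
    h, w = drone_image.shape[:2]
    reprojected = np.zeros_like(drone_image)
    covered = np.zeros((h, w), dtype=bool)

    # Warp every plane block with its own homography and superimpose the results.
    for mask, block, H in blocks:
        if H is None:
            continue
        warped = cv2.warpPerspective(block, H, (w, h))
        valid = warped.any(axis=-1) & ~covered
        reprojected[valid] = warped[valid]
        covered |= valid

    # Black connected areas (missing side-elevation pixels) are filled
    # with the drone image pixels at the corresponding positions.
    reprojected[~covered] = drone_image[~covered]
    return reprojected
```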
Continuing with module 3 in fig. 2, the illegal building detection device takes the images or video key frames obtained by previous unmanned aerial vehicle inspection and the homographically transformed orthographic map slices as the two inputs of the change detection module, and analyzes the changes of the corresponding area between the two moments. Please refer to the following steps:
step S14: and acquiring a change detection area result of the target area based on the image difference information of the reprojected image and the unmanned aerial vehicle image.
In this application embodiment, the violation detection device may obtain image difference information, such as pixel value difference, pixel value distribution, and the like, between the reprojected image and the unmanned aerial vehicle image, so as to compare and generate a change detection area result of the target area, i.e., analyze a change detection area of the target area.
The present application provides a change detection model based on a deep neural network, and please refer to fig. 10 for a specific structure, where fig. 10 is a schematic structural diagram of an embodiment of a twin change detection model provided in the present application.
As shown in fig. 10, the twin change detection model of the present application adopts a twin network design: remote sensing image features at a first moment T1 (i.e., the orthographic map slice) and a second moment T2 (i.e., the unmanned aerial vehicle image) are extracted by two twin networks sharing weights, and four stacked Transformer modules form the main structure of each twin network. The first twin network and the second twin network each comprise four Transformer modules (Transformer blocks), and a Difference Transformer structure is designed to process the Transformer outputs of the twin feature extraction network at different stages. The feature comparison network comprises four Difference Transformer structures.
The Transformer structure comprises a plurality of layers, each layer comprises a plurality of head modules, and each head module is a self-attention module with the following structure:

Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V

where d_k is the dimension of K. The attention above can be described as mapping a query and a set of key-value pairs to an output, where the output is calculated as a weighted sum of the values, and the weight assigned to each value is calculated from a similarity function of the query and the corresponding key. This form of attention is called Scaled Dot-Product Attention.
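For reference, a minimal single-head sketch of this Scaled Dot-Product Attention in PyTorch follows; the tensor shapes and function name are illustrative, not taken from the patent.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k: (batch, seq_len, d_k); v: (batch, seq_len, d_v)
    d_k = k.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d_k ** 0.5  # similarity of query and keys
    weights = F.softmax(scores, dim=-1)                         # weight assigned to each value
    return torch.matmul(weights, v)                             # weighted sum of the values
```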
The outputs of the multiple self-attention modules are concatenated to form a multi-head attention module (MHA), and position information is embedded through a 3×3 convolution:

z'_l = MHA(z_{l-1}) + z_{l-1}
z_l = MLP(GELU(Conv_{3×3}(MLP(z'_l)))) + z'_l

where z_l is the output of the l-th layer, GELU denotes the Gaussian error linear unit activation function, l is the layer index within the Transformer module, MHA is the multi-head attention module, and MLP is a fully connected layer.
The head number, layer number and channel number of the different Transformer modules in the whole twin feature extraction network are respectively: block1: 1 head, 3 layers, 64 channels; block2: 2 heads, 6 layers, 128 channels; block3: 5 heads, 40 layers, 320 channels; block4: 8 heads, 3 layers, 512 channels.
The main difference between the Difference Transformer structure and the other Transformer modules lies in the design of Query, Key and Value: Query uses the result of the Transformer feature extraction at moment T1, while Key and Value use the result of the Transformer feature extraction at moment T2:

Q_i = F_i^{T1},  K_i = V_i = F_i^{T2}
D_i = Attention(Q_i, K_i, V_i)

where F_i^{T1} and F_i^{T2} respectively denote the stage-i outputs of the Transformers of the twin feature extraction network at moment T1 and moment T2, and D_i is the stage-i output of the Difference Transformer.
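A minimal PyTorch sketch of one such Difference Transformer block, i.e. cross-attention with the T1 features as Query and the T2 features as Key and Value, is shown below. The class name, the use of nn.MultiheadAttention and the residual connection with layer normalization are illustrative assumptions; the patent only fixes which branch supplies Query, Key and Value.

```python
import torch
import torch.nn as nn

class DifferenceTransformerBlock(nn.Module):
    def __init__(self, channels, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat_t1, feat_t2):
        # feat_t1, feat_t2: (batch, tokens, channels) stage outputs of the twin network.
        # Query from the T1 (ortho-map slice) branch, Key/Value from the T2 (drone image) branch.
        diff, _ = self.attn(query=feat_t1, key=feat_t2, value=feat_t2)
        return self.norm(diff + feat_t1)   # residual + norm: an assumption, not stated in the text
```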
The outputs of the four Difference Transformer modules are fused and a prediction result is formed by a Head module (prediction module), i.e. the result output module, where the Head module is composed of an MLP (Multi-Layer Perceptron) and an upsampling layer. The numbers of output channels of the four Difference Transformer modules are first unified by an MLP layer; the results are then upsampled to one quarter of the original image size by an upsampling layer; the concatenation of the four upsampled Difference Transformer outputs is passed through a final MLP layer to output the classification (changed/unchanged) confidence, which is then upsampled to the original resolution, and a threshold is selected to obtain the final change detection result.
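A sketch of such a result output module is given below, with the MLP layers realized as 1×1 convolutions and the stage channel numbers taken from the block configuration above; the class name, the embedding width and the use of bilinear interpolation are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChangeHead(nn.Module):
    def __init__(self, in_channels=(64, 128, 320, 512), embed=128, num_classes=2):
        super().__init__()
        # MLP layers (1x1 convolutions) that unify the channel number of each stage.
        self.unify = nn.ModuleList(nn.Conv2d(c, embed, 1) for c in in_channels)
        # Final MLP layer producing the changed / unchanged confidence.
        self.classify = nn.Conv2d(embed * len(in_channels), num_classes, 1)

    def forward(self, diffs, quarter_size):
        # diffs: four Difference Transformer outputs (B, C_i, H_i, W_i).
        feats = [F.interpolate(m(d), size=quarter_size, mode="bilinear", align_corners=False)
                 for m, d in zip(self.unify, diffs)]
        logits = self.classify(torch.cat(feats, dim=1))
        # Upsample back to the original resolution; thresholding the "changed"
        # channel yields the final change detection result.
        return F.interpolate(logits, scale_factor=4, mode="bilinear", align_corners=False)
```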
Step S15: and forming illegal building detection information according to the result of the change detection area, wherein the illegal building detection information comprises illegal building behavior positions, illegal building behavior time and/or illegal building behavior types.
In the embodiment of the application, the illegal building detection device outputs the detection result of the change area through the change detection model, and mainly detects newly-added illegal buildings, construction materials and construction equipment, such as illegal buildings additionally built and expanded, materials and facilities such as sand, scaffolds, concrete mixers and the like during construction. Specifically, please refer to fig. 11, fig. 11 is a schematic diagram of an embodiment of a variation detection result provided in the present application. The area e, the area f, and the area h in fig. 11 are the violations in the change detection result and the locations thereof, respectively.
The illegal building detection device fuses the instance segmentation results of the illegal buildings, performs IoU (intersection over union) calculation on the overlapping areas, and removes the areas with a large overlap rate, so as to obtain the final illegal building detection result and form alarm information. The illegal building detection information comprises the illegal building behavior position, the illegal building behavior time, the illegal building behavior type and the like.
As shown in fig. 2, on the one hand, the violation detection device obtains the change detection results of the orthographic map slice and the unmanned aerial vehicle image through the twin change detection model and presents them as predicted change detection frames; on the other hand, the illegal building detection device segments roof and illegal building instances on the unmanned aerial vehicle image through a pre-trained target detection model and obtains the illegal building detection frames on the unmanned aerial vehicle image. Furthermore, the illegal construction detection device fuses the predicted change detection frames and the illegal building detection frames, thereby processing the change detection area result and obtaining the final illegal building detection information.
Specifically, the illegal building detection device calculates the overlap rate of the plurality of predicted change detection frames and the plurality of illegal building detection frames, removes the predicted change detection frames or illegal building detection frames whose overlap rate is larger than a preset threshold, and forms the illegal building detection information from the remaining predicted change detection frames and illegal building detection frames.
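This fusion step can be sketched as follows; boxes are assumed to be (x1, y1, x2, y2) tuples, and the rule of dropping the change box of an overlapping pair is one possible reading of the removal step, chosen here for illustration.

```python
def iou(box_a, box_b):
    # Intersection over union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def fuse_detections(change_boxes, violation_boxes, iou_thresh=0.5):
    # Keep a predicted change box only if it does not largely overlap any
    # illegal building detection box; the remaining boxes form the result.
    kept = [cb for cb in change_boxes
            if all(iou(cb, vb) <= iou_thresh for vb in violation_boxes)]
    return kept + list(violation_boxes)
```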
Through steps S11 to S15, the illegal construction detection method covers all stages of illegal construction behavior: before implementation (construction materials and equipment), during implementation (scaffolds and semi-finished buildings) and after implementation (inventory illegal buildings), and can supervise illegal construction over a large area with high efficiency. The illegal building detection device obtains inventory illegal buildings through instance segmentation of the unmanned aerial vehicle image, and obtains changed areas by comparing and analyzing the unmanned aerial vehicle images of multiple periods with the pre-stitched base map image, where the changed areas contain ongoing or upcoming illegal construction targets or objects (such as construction equipment, building materials, semi-finished buildings and the like).
In the embodiment of the application, the illegal construction detection device acquires an unmanned aerial vehicle image acquired over a target area; acquires a corresponding orthographic map slice according to the unmanned aerial vehicle image; acquires a reprojected image in which the orthographic map slice is reprojected to the coordinate system of the unmanned aerial vehicle image; acquires a change detection area result of the target area based on the image difference information of the reprojected image and the unmanned aerial vehicle image; and forms illegal building detection information according to the change detection area result, wherein the illegal building detection information includes the illegal building behavior position, illegal building behavior time and/or illegal building behavior type. With this illegal construction detection method, whole-process supervision of illegal construction behaviors in the target area can be realized through the change detection result between the orthographic map and the unmanned aerial vehicle image. The illegal building detection method can further achieve whole-process supervision of illegal building behaviors in the target area through the change detection results of the unmanned aerial vehicle images of multiple periods.
The illegal construction detection method of the embodiment of the application combines illegal building instance segmentation with the change detection results of multiple images, thereby realizing whole-process supervision of illegal construction behavior, and can automatically, widely and efficiently supervise illegal construction before implementation (construction materials and equipment), during implementation (scaffolds and semi-finished buildings) and after implementation (inventory illegal buildings). By means of deep-learning-based instance segmentation of unmanned aerial vehicle images and change detection analysis, illegal building samples can also be accumulated to build a massive illegal building sample library, so that the deep learning algorithm is updated and upgraded through model self-iteration.
The above embodiments are only one of the common cases of the present application and do not limit the technical scope of the present application, so that any minor modifications, equivalent changes or modifications made to the above contents according to the essence of the present application still fall within the technical scope of the present application.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an embodiment of a violation detection apparatus provided in the present application. The violation detection apparatus 500 of the embodiment of the present application includes a processor 51, a memory 52, an input-output device 53, and a bus 54.
The processor 51, the memory 52, and the input/output device 53 are respectively connected to the bus 54, the memory 52 stores program data, and the processor 51 is configured to execute the program data to implement the violation detection method according to the above embodiment.
In the embodiment of the present application, the processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip having signal processing capabilities. The processor 51 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 51 may be any conventional processor or the like.
Please refer to fig. 13, where fig. 13 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application, the computer storage medium 600 stores program data 61, and the program data 61 is used to implement the violation detection method of the above embodiment when being executed by a processor.
Embodiments of the present application may be implemented in software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, which is defined by the claims and the accompanying drawings, and the equivalents and equivalent structures and equivalent processes used in the present application and the accompanying drawings are also directly or indirectly applicable to other related technical fields and are all included in the scope of the present application.

Claims (12)

1. An illegal establishment detection method, characterized in that the illegal establishment detection method comprises:
acquiring an unmanned aerial vehicle image acquired based on a target area;
acquiring a corresponding orthographic map slice according to the unmanned aerial vehicle image;
acquiring a reprojected image in which the orthographic map slice is reprojected to the coordinate system of the unmanned aerial vehicle image;
acquiring a change detection area result of the target area based on the image difference information of the reprojected image and the unmanned aerial vehicle image;
and forming illegal building detection information according to the change detection area result, wherein the illegal building detection information comprises an illegal building behavior position, illegal building behavior time and/or illegal building behavior type.
2. The illegal construction detection method of claim 1, wherein
the acquiring a change detection area result of the target area based on image difference information of the reprojected image and the unmanned aerial vehicle image comprises:
taking the reprojected image as a first input of a twin change detection network and the unmanned aerial vehicle image as a second input of the twin change detection network;
and comparing, by the twin change detection network, remote sensing image features of the reprojected image with remote sensing image features of the unmanned aerial vehicle image, and outputting the change detection area result according to the comparison result.
3. The illegal construction detection method of claim 2, wherein
the twin change detection network comprises a result output module, a feature comparison network, and a first twin network and a second twin network that share weights;
the first twin network is used for extracting the remote sensing image features of the reprojected image, the second twin network is used for extracting the remote sensing image features of the unmanned aerial vehicle image, the feature comparison network is used for comparing the remote sensing image features of the reprojected image with those of the unmanned aerial vehicle image, and the result output module fuses the comparison output of the feature comparison network to form the change detection area result.
4. The illegal construction detection method of claim 3, wherein
the result output module comprises a multilayer perceptron and an upsampling layer; the multilayer perceptron is used for unifying the number of output channels of the feature comparison network; the upsampling layer is used for upsampling the output of the feature comparison network to a preset size of the reprojected image and/or the unmanned aerial vehicle image and inputting the upsampling result into the multilayer perceptron; the multilayer perceptron is further used for outputting a classification confidence according to the upsampling result, wherein the classification comprises "detection area changed" and "detection area unchanged".
5. The illegal construction detection method of claim 1, wherein
the change detection area result comprises predicted change detection frames;
the forming illegal construction detection information according to the change detection area result comprises:
obtaining a plurality of predicted change detection frames from the change detection area result;
performing instance segmentation of illegal buildings on the unmanned aerial vehicle image to obtain a plurality of illegal building detection frames of the unmanned aerial vehicle image;
and calculating the overlap rate between the plurality of predicted change detection frames and the plurality of illegal building detection frames, removing the predicted change detection frames or illegal building detection frames whose overlap rate is greater than a preset threshold, and forming the illegal construction detection information from the remaining predicted change detection frames and illegal building detection frames.
6. The illegal construction detection method of claim 1, wherein
the acquiring a reprojected image obtained by reprojecting the orthographic map slice onto the coordinate system of the unmanned aerial vehicle image comprises:
acquiring a plurality of plane blocks of the orthographic map slice, wherein a plane block is a region of the orthographic map slice lying on the same height plane;
acquiring a homography matrix between each plane block and the unmanned aerial vehicle image;
and reprojecting the plane blocks of the orthographic map slice onto the coordinate system of the unmanned aerial vehicle image according to the homography matrix of each plane block, respectively, to form the reprojected image.
7. The illegal construction detection method of claim 6, wherein
the acquiring a plurality of plane blocks of the orthographic map slice comprises:
acquiring a corresponding digital surface model slice according to the orthographic map slice;
clustering all model points in the digital surface model slice according to height information to obtain a plurality of model point groups;
and forming a plurality of masks from the plurality of model point groups, and processing the orthographic map slice with the masks to obtain the plurality of plane blocks of the orthographic map slice.
8. The illegal construction detection method of claim 6, wherein
the acquiring a plurality of plane blocks of the orthographic map slice comprises:
performing instance segmentation on the orthographic map slice to obtain a roof instance area and a ground instance area in the orthographic map slice;
and dividing the orthographic map slice according to the roof instance area and the ground instance area to obtain the plurality of plane blocks of the orthographic map slice.
9. The illegal construction detection method of claim 1, wherein
the acquiring a corresponding orthographic map slice according to the unmanned aerial vehicle image comprises:
reading positioning information of the unmanned aerial vehicle image;
and cutting out, from the orthographic map according to the positioning information, an orthographic map slice covering the same range as the unmanned aerial vehicle image.
10. The illegal construction detection method of claim 9, wherein
the cutting out, from the orthographic map according to the positioning information, an orthographic map slice covering the same range as the unmanned aerial vehicle image comprises:
acquiring a first image size of the unmanned aerial vehicle image;
determining a positioning range of the orthographic map slice in the orthographic map according to the positioning information;
determining a second image size of the orthographic map slice according to the first image size, wherein the second image size is larger than the first image size;
and cutting out the orthographic map slice from the orthographic map according to the second image size and the positioning range.
11. An illegal construction detection device, comprising a memory and a processor coupled to the memory;
wherein the memory is adapted to store program data and the processor is adapted to execute the program data to implement the illegal construction detection method according to any one of claims 1-10.
12. A computer storage medium for storing program data which, when executed by a computer, implements the illegal construction detection method according to any one of claims 1-10.
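
As a non-limiting illustration of the twin (Siamese) change detection network described in claims 2-4, the following Python sketch builds a shared-weight encoder pair, a feature comparison stage, and a result output module with an upsampling step and a 1x1-convolution multilayer perceptron. The backbone depth, channel counts and exact layer ordering are assumptions made for readability and are not taken from the patent.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinChangeDetector(nn.Module):
    def __init__(self, in_ch=3, feat_ch=64, num_classes=2):
        super().__init__()
        # First/second twin networks: one encoder applied to both inputs,
        # so the weights are shared by construction.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Feature comparison network: fuses the two remote sensing feature maps.
        self.compare = nn.Sequential(
            nn.Conv2d(feat_ch * 2, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Result output module: 1x1-conv "multilayer perceptron" that unifies the
        # channel count and scores each pixel as changed / unchanged.
        self.mlp = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, num_classes, 1),
        )

    def forward(self, reprojected, uav_image):
        f1 = self.encoder(reprojected)   # features of the reprojected slice
        f2 = self.encoder(uav_image)     # features of the UAV image
        fused = self.compare(torch.cat([f1, f2], dim=1))
        # Upsampling layer: restore the preset (input) resolution before scoring.
        fused = F.interpolate(fused, size=uav_image.shape[-2:], mode="bilinear",
                              align_corners=False)
        return self.mlp(fused)           # (N, 2, H, W) classification confidences

# Usage: two aligned 512x512 RGB tiles.
net = TwinChangeDetector()
logits = net(torch.randn(1, 3, 512, 512), torch.randn(1, 3, 512, 512))
probs = logits.softmax(dim=1)            # probs[:, 1] ~ "detection area changed"
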
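Claim 5 merges predicted change detection frames with illegal building detection frames obtained by instance segmentation, discarding frames whose overlap rate exceeds a threshold. The sketch below uses intersection-over-union as the overlap rate and drops the predicted change frame when a pair overlaps heavily; the claim leaves open which of the two frames is removed, so this is only one possible reading.

from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    # Intersection-over-union used here as the "overlap rate".
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def merge_detections(change_boxes: List[Box], building_boxes: List[Box],
                     thresh: float = 0.5) -> List[Box]:
    # Keep a predicted change frame only if it does not heavily overlap an
    # illegal-building frame; the building frames are always kept.
    kept = [c for c in change_boxes
            if all(iou(c, b) <= thresh for b in building_boxes)]
    return kept + building_boxes

print(merge_detections([(0, 0, 10, 10), (50, 50, 60, 60)], [(1, 1, 9, 9)]))
# -> [(50, 50, 60, 60), (1, 1, 9, 9)]
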
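Claim 6 reprojects each plane block of the orthographic map slice onto the unmanned aerial vehicle image with a per-block homography. The sketch below assumes matched keypoints between each block and the UAV image are already available from some feature matcher; how those correspondences are obtained is outside this illustration.

import cv2
import numpy as np

def reproject_blocks(blocks, matches_per_block, uav_shape):
    """blocks: list of (H, W, 3) uint8 plane blocks cut from the ortho slice.
    matches_per_block: list of (pts_block, pts_uav) float32 arrays of shape (N, 2).
    uav_shape: (height, width) of the UAV image."""
    h, w = uav_shape
    reprojected = np.zeros((h, w, 3), dtype=np.uint8)
    for block, (pts_block, pts_uav) in zip(blocks, matches_per_block):
        # Homography of this plane block onto the UAV image coordinate system.
        H, _ = cv2.findHomography(pts_block, pts_uav, cv2.RANSAC, 3.0)
        if H is None:
            continue                      # not enough consistent matches for this block
        warped = cv2.warpPerspective(block, H, (w, h))
        nonzero = warped.any(axis=2)
        reprojected[nonzero] = warped[nonzero]   # composite block by block
    return reprojected
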
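Claim 7 derives the plane blocks by clustering the digital surface model slice on height and masking the orthographic map slice with each cluster. The sketch below uses k-means with a fixed number of planes purely for illustration; the claim does not prescribe a particular clustering algorithm or plane count.

import numpy as np
from sklearn.cluster import KMeans

def plane_blocks(ortho: np.ndarray, dsm: np.ndarray, n_planes: int = 3):
    """ortho: (H, W, 3) orthographic map slice; dsm: (H, W) heights in metres."""
    heights = dsm.reshape(-1, 1)
    labels = KMeans(n_clusters=n_planes, n_init=10).fit_predict(heights)
    labels = labels.reshape(dsm.shape)
    blocks = []
    for k in range(n_planes):
        mask = (labels == k)              # model point group -> binary mask
        block = np.zeros_like(ortho)
        block[mask] = ortho[mask]         # keep only pixels on this height plane
        blocks.append((block, mask))
    return blocks
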
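Claims 9 and 10 cut the orthographic map slice out of the orthographic map around the UAV image's positioning information, with a second image size chosen larger than the first so the slice fully covers the frame. The sketch below assumes a simple north-up geotransform (map origin plus a metres-per-pixel scale); the margin factor is an arbitrary illustrative choice.

import numpy as np

def crop_ortho_slice(ortho_map: np.ndarray, origin_xy, metres_per_px: float,
                     uav_xy, uav_size_px, margin: float = 1.25):
    """origin_xy: world coordinates of the ortho map's top-left pixel.
    uav_xy: world coordinates of the UAV image centre (its positioning information).
    uav_size_px: (height, width) of the UAV image, i.e. the first image size."""
    # Positioning range: UAV image centre expressed in ortho-map pixel coordinates.
    col = int((uav_xy[0] - origin_xy[0]) / metres_per_px)
    row = int((origin_xy[1] - uav_xy[1]) / metres_per_px)
    # Second image size: larger than the first, so the slice covers the whole frame.
    half_h = int(uav_size_px[0] * margin) // 2
    half_w = int(uav_size_px[1] * margin) // 2
    r0, r1 = max(row - half_h, 0), min(row + half_h, ortho_map.shape[0])
    c0, c1 = max(col - half_w, 0), min(col + half_w, ortho_map.shape[1])
    return ortho_map[r0:r1, c0:c1]
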
CN202211377018.4A 2022-11-04 2022-11-04 Illegal construction detection method, illegal construction detection device and computer storage medium Active CN115424155B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211377018.4A CN115424155B (en) 2022-11-04 2022-11-04 Illegal construction detection method, illegal construction detection device and computer storage medium

Publications (2)

Publication Number Publication Date
CN115424155A true CN115424155A (en) 2022-12-02
CN115424155B CN115424155B (en) 2023-03-17

Family

ID=84207696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211377018.4A Active CN115424155B (en) 2022-11-04 2022-11-04 Illegal construction detection method, illegal construction detection device and computer storage medium

Country Status (1)

Country Link
CN (1) CN115424155B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016099316A (en) * 2014-11-26 2016-05-30 アジア航測株式会社 Feature change discrimination method, feature change discrimination device, and feature change discrimination program
CN105956058A (en) * 2016-04-27 2016-09-21 东南大学 Method for quickly discovering changed land by adopting unmanned aerial vehicle remote sensing images
CN109934166A (en) * 2019-03-12 2019-06-25 中山大学 Unmanned plane image change detection method based on semantic segmentation and twin neural network
WO2022213673A1 (en) * 2021-04-06 2022-10-13 中国矿业大学 Method for extracting three-dimensional surface deformation by combining unmanned aerial vehicle doms and satellite-borne sar images
CN113313006A (en) * 2021-05-25 2021-08-27 哈工智慧(武汉)科技有限公司 Urban illegal construction supervision method and system based on unmanned aerial vehicle and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHOU HONGBIN et al.: "MASNET: IMPROVE PERFORMANCE OF SIAMESE NETWORKS WITH MUTUAL-ATTENTION FOR REMOTE SENSING CHANGE DETECTION TASKS", ARXIV *
LIU RENZHAO et al.: "UAV Oblique Photogrammetry and Mapping Technology (13th Five-Year Plan Textbook for Higher Vocational Education in Surveying, Mapping and Geoinformation)", 28 February 2021 *
XU KEHU et al.: "Intelligent Computing Methods and Their Applications", 31 July 2019 *
YANG YONGCHONG: "Modern Land Survey Technology", 30 September 2015 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115620150A (en) * 2022-12-05 2023-01-17 海豚乐智科技(成都)有限责任公司 Multi-modal image ground building identification method and device based on twin transform
CN115620150B (en) * 2022-12-05 2023-08-04 海豚乐智科技(成都)有限责任公司 Multi-mode image ground building identification method and device based on twin transformers
CN116341875A (en) * 2023-04-25 2023-06-27 盐城市建设工程质量检测中心有限公司 Engineering detection system and method applied to building construction site
CN116341875B (en) * 2023-04-25 2023-11-21 盐城市建设工程质量检测中心有限公司 Engineering detection system and method applied to building construction site
CN117648596A (en) * 2023-11-28 2024-03-05 河北建工集团有限责任公司 Digital twin and intelligent sensor fusion method and system for building construction
CN117648596B (en) * 2023-11-28 2024-04-30 河北建工集团有限责任公司 Digital twin and intelligent sensor fusion method and system for building construction

Also Published As

Publication number Publication date
CN115424155B (en) 2023-03-17

Similar Documents

Publication Publication Date Title
CN115424155B (en) Illegal construction detection method, illegal construction detection device and computer storage medium
US10970543B2 (en) Distributed and self-validating computer vision for dense object detection in digital images
CN107835997B (en) Vegetation management for powerline corridor monitoring using computer vision
US10346687B2 (en) Condition detection using image processing
CN108776772B (en) Cross-time building change detection modeling method, detection device, method and storage medium
US8917934B2 (en) Multi-cue object detection and analysis
US8744177B2 (en) Image processing method and medium to extract a building region from an image
CN112883948B (en) Semantic segmentation and edge detection model building and guardrail abnormity monitoring method
EP3543910A1 (en) Cloud detection in aerial imagery
CN112633661A (en) BIM-based emergency dispatching command method, system, computer equipment and readable medium
CN112084892B (en) Road abnormal event detection management device and method thereof
Sheng et al. Surveilling surveillance: Estimating the prevalence of surveillance cameras with street view data
US20210375033A1 (en) Techniques for creating, organizing, integrating, and using georeferenced data structures for civil infrastructure asset management
CN113158954B (en) Automatic detection method for zebra crossing region based on AI technology in traffic offsite
CN115439672B (en) Image matching method, illicit detection method, terminal device, and storage medium
CN115249345A (en) Traffic jam detection method based on oblique photography three-dimensional live-action map
CN115797668A (en) Image matching method, illicit detection method, terminal device, and storage medium
CN117392634B (en) Lane line acquisition method and device, storage medium and electronic device
Ahmed et al. Unmanned aerial multi-object dynamic frame detection and skipping using deep learning on the internet of drones
CN115731477A (en) Image recognition method, illicit detection method, terminal device, and storage medium
Böge et al. Localization and grading of building roof damages in high-resolution aerial images
CN115170860A (en) Image classification recognition method, recognition device and storage medium
Anuvab et al. PlateSegFL: A Privacy-Preserving License Plate Detection Using Federated Segmentation Learning
CN117132969A (en) Recognition method, device, computer and storage medium based on multinational license plate
CN114140707A (en) Power grid fault inspection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant