CN115439672B - Image matching method, illicit detection method, terminal device, and storage medium - Google Patents

Image matching method, illicit detection method, terminal device, and storage medium

Info

Publication number
CN115439672B
CN115439672B (application CN202211377096.4A)
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
image
slice
vehicle image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211377096.4A
Other languages
Chinese (zh)
Other versions
CN115439672A (en)
Inventor
周宏宾
金恒
任宇鹏
李乾坤
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211377096.4A priority Critical patent/CN115439672B/en
Publication of CN115439672A publication Critical patent/CN115439672A/en
Application granted granted Critical
Publication of CN115439672B publication Critical patent/CN115439672B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/08Projecting images onto non-planar surfaces, e.g. geodetic screens
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image matching method, an illegal construction detection method, a terminal device and a computer storage medium. The image matching method comprises the following steps: clustering all model points in a digital surface model slice according to height information to obtain a plurality of model point groups; forming a plurality of masks from the plurality of model point groups, and processing an orthographic map slice with the masks; acquiring a projection relation between the orthographic map slice subjected to mask processing and an unmanned aerial vehicle image; and re-projecting the orthographic map slice subjected to mask processing into the coordinate system of the unmanned aerial vehicle image according to the projection relation, and overlaying the projected orthographic map slice with the unmanned aerial vehicle image to form a re-projected matched image. By applying masks formed by clustering the digital surface model slice to the orthographic map slice, the image matching method ensures that the block of the orthographic map slice under each mask lies approximately in one plane, so that re-projection from the orthographic map slice to the unmanned aerial vehicle image can be achieved.

Description

Image matching method, illicit detection method, terminal device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image matching method, an illegal construction detection method, a terminal device, and a computer storage medium.
Background
Image comparison, also called image change detection, requires the two images to be registered first. The general procedure for registering two images is: first extract feature points from each image, then match the two sets of feature points, then compute the homography matrix between the two images from the matched feature points, and finally re-project one image into the coordinate system of the other image according to the homography matrix. When a homography matrix is used to describe the relation between two images, certain preconditions must be met: the two images must depict the same plane, or the camera pose difference between the two images must consist of rotation only. However, in the matching of an orthographic base map with an unmanned aerial vehicle image, neither condition is satisfied: the roofs and the ground in the images are usually not on the same plane, and there is translation between the camera poses of the two images.
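The sketch below illustrates this conventional single-homography registration pipeline with OpenCV; the ORB detector, brute-force matcher and RANSAC threshold are illustrative choices, not part of the present application.

```python
# Minimal sketch of the conventional single-homography registration described above.
import cv2
import numpy as np

def to_gray(img):
    # ORB expects a single-channel 8-bit image.
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def register_single_homography(src_img, dst_img):
    """Register src_img to dst_img with one homography (valid only if both depict one plane)."""
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(to_gray(src_img), None)  # feature points of image 1
    kp2, des2 = orb.detectAndCompute(to_gray(dst_img), None)  # feature points of image 2
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Homography from the matched feature points, then re-projection into dst's coordinates.
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = dst_img.shape[:2]
    return cv2.warpPerspective(src_img, H, (w, h))
```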
Disclosure of Invention
The application provides an image matching method, an illegal construction detection method, a terminal device and a computer storage medium.
One technical solution adopted by the present application is to provide an image matching method, including:
acquiring unmanned aerial vehicle images, and acquiring corresponding orthographic map slices and digital surface model slices based on the unmanned aerial vehicle images;
clustering all model points in the digital surface model slice according to height information to obtain a plurality of model point groups;
forming a plurality of masks from the plurality of model point groups, and processing the orthographic map slice by using the plurality of masks;
acquiring a projection relation between the orthographic map slice subjected to mask processing and the unmanned aerial vehicle image;
and re-projecting the orthographic map slice after the mask processing to a coordinate system of the unmanned aerial vehicle image according to the projection relation, and overlapping the orthographic map slice after projection and the unmanned aerial vehicle image to form a re-projected matched image.
And the model points in the digital surface model slice correspond to the pixel points in the orthographic map slice one by one.
Wherein the projection relationship comprises a homography matrix;
the obtaining of the projection relationship between the orthographic map slice and the unmanned aerial vehicle image after mask processing comprises:
extracting a plurality of first feature points of the unmanned aerial vehicle image and a plurality of second feature points of the orthographic map slice after mask processing;
and matching the plurality of first feature points with the plurality of second feature points, and calculating the homography matrix between the orthographic map slice after mask processing and the unmanned aerial vehicle image according to the matching result.
Wherein, the obtaining of the projection relationship between the orthographic map slice and the unmanned aerial vehicle image after the mask processing comprises:
acquiring an ortho map slice area corresponding to each mask in the ortho map slices after mask processing;
extracting a plurality of second feature points of the orthographic map slice area corresponding to each mask and a plurality of first feature points of the unmanned aerial vehicle image;
and calculating, for each mask, the homography matrix between the orthographic map slice area corresponding to that mask and the unmanned aerial vehicle image by using the plurality of second feature points of that area and the plurality of first feature points.
The re-projecting the orthographic map slice after the mask processing to the coordinate system of the unmanned aerial vehicle image according to the projection relation, and forming the re-projected matched image by overlapping the projected orthographic map slice with the unmanned aerial vehicle image, includes:
re-projecting the orthographic map slice area corresponding to each mask into the coordinate system of the unmanned aerial vehicle image according to the homography matrix between that orthographic map slice area and the unmanned aerial vehicle image;
and overlapping the multiple groups of re-projection results to form the re-projected matched image.
Wherein, the acquiring of the corresponding orthographic map slice and the digital surface model slice based on the unmanned aerial vehicle image includes:
reading positioning information of the unmanned aerial vehicle image;
and cutting, according to the positioning information, an orthographic map slice having the same image range as the unmanned aerial vehicle image from an orthographic map, and cutting, according to the positioning information, a digital surface model slice having the same image range as the unmanned aerial vehicle image from a digital surface model.
Another technical solution adopted by the present application is to provide an illegal construction detection method, including:
acquiring a real-time unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice and a digital surface model slice based on the unmanned aerial vehicle image;
acquiring a matched image of the unmanned aerial vehicle image with the orthographic map slice and the digital surface model slice, wherein the matched image is acquired by the image matching method described above;
acquiring difference information of buildings in the unmanned aerial vehicle image based on the matching image;
and judging whether the building is illegally built according to the difference information.
After the matched image of the unmanned aerial vehicle image with the orthographic map slice and the digital surface model slice is acquired, the illegal construction detection method further includes:
filling the corresponding positions in the matched image with the pixels at the building side-elevation positions in the unmanned aerial vehicle image.
Another technical solution adopted by the present application is to provide a terminal device, where the terminal device includes a memory and a processor coupled to the memory;
wherein the memory is configured to store program data and the processor is configured to execute the program data to implement the image matching method and/or the violation detection method as described above.
Another technical solution adopted by the present application is to provide a computer storage medium for storing program data, which when executed by a computer, is used to implement the image matching method and/or the violation detection method as described above.
The beneficial effects of the present application are: the terminal device acquires an unmanned aerial vehicle image and acquires a corresponding orthographic map slice and digital surface model slice based on the unmanned aerial vehicle image; clusters all model points in the digital surface model slice according to height information to obtain a plurality of model point groups; forms a plurality of masks from the model point groups and processes the orthographic map slice with the masks; acquires the projection relation between the orthographic map slice subjected to mask processing and the unmanned aerial vehicle image; and re-projects the orthographic map slice subjected to mask processing into the coordinate system of the unmanned aerial vehicle image according to the projection relation, overlaying the projected orthographic map slice with the unmanned aerial vehicle image to form a re-projected matched image. By applying masks formed by clustering the digital surface model slice to the orthographic map slice, the image matching method ensures that the block of the orthographic map slice under each mask lies approximately in one plane, so that re-projection from the orthographic map slice to the unmanned aerial vehicle image can be achieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an image matching method provided herein;
FIG. 2 is a schematic diagram of a general flow of an image matching method provided herein;
fig. 3 is a schematic diagram of an embodiment of a drone image provided by the present application;
FIG. 4 is a schematic view of an embodiment of an orthographic map slice provided herein;
FIG. 5 is a flowchart illustrating specific sub-steps of step S14 of the image matching method shown in FIG. 1;
FIG. 6 is a schematic diagram illustrating an embodiment of an orthographic map reprojection result provided herein;
FIG. 7 is a schematic flowchart illustrating an embodiment of a violation detection method provided in the present application;
fig. 8 is a schematic structural diagram of an embodiment of a terminal device provided in the present application;
FIG. 9 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Supervision of illegal construction has long been an important part of urban management. Traditional manual patrols for illegal buildings are time-consuming and labor-intensive, so supervision is neither timely nor effective; moreover, because of the limited viewing angles of such patrols, unauthorized construction and extensions on building roofs cannot be discovered in time, and a large amount of time and effort must then be invested in subsequent demolition and rectification. How to control and remediate illegal construction over the long term, in a timely manner and at low cost, to turn after-the-fact treatment into in-process or even advance prevention, to effectively deter unauthorized extensions and additions, and to improve the efficiency of supervision departments is therefore an urgent problem to be solved.
An unmanned aerial vehicle-based illegal construction inspection platform is expected to help municipal administration departments solve this problem. The construction of such an inspection platform comprises two stages: base map construction and routine inspection. In the base map construction stage, an unmanned aerial vehicle shoots a large number of images of the target area in an operation mode with a high overlap rate, the images are stitched into an orthographic map and a high-precision Digital Surface Model (DSM) using photogrammetry, and the orthographic map and the DSM serve as the base maps of the target area to be managed. In the routine inspection stage, the unmanned aerial vehicle shoots images of the target area in an operation mode with a low overlap rate, and an image comparison technique is used to compare the unmanned aerial vehicle images with the orthographic map in order to detect illegal buildings.
However, the roofs and the ground in the images are usually not on the same plane, and there is translation between the camera poses of the two images, which degrades the registration when matching the orthographic base map with the unmanned aerial vehicle image. The present application therefore provides an image matching method that can register the orthographic map with a non-orthographic unmanned aerial vehicle image. According to the height information in the DSM, the image matching method of the present application segments the orthographic map into a plurality of blocks, each of which is approximately a single-plane image, so that the blocks can be registered with the non-orthographic unmanned aerial vehicle image using a plurality of homography matrices respectively. The DSM is a product generated simultaneously while stitching the orthographic map.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic flowchart of an embodiment of an image matching method provided by the present application, and fig. 2 is a schematic flowchart of a general flow of the image matching method provided by the present application.
The image matching method is applied to an image matching apparatus, where the image matching apparatus may be a server, or a system in which a server and a terminal device cooperate with each other. Accordingly, the parts included in the image matching apparatus, such as units, sub-units, modules and sub-modules, may all be disposed in the server, or may be disposed in the server and the terminal device respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein. In some possible implementations, the image matching method of the embodiments of the present application may be implemented by a processor calling computer readable instructions stored in a memory.
Specifically, as shown in fig. 1, the image matching method in the embodiment of the present application specifically includes the following steps:
step S11: acquiring unmanned aerial vehicle images, and acquiring corresponding ortho-map slices and digital surface model slices based on the unmanned aerial vehicle images.
In the embodiment of the present application, the image matching apparatus acquires an unmanned aerial vehicle image, which is mainly captured in real time by the unmanned aerial vehicle above the target area. The image matching apparatus further acquires a corresponding orthographic map slice and digital surface model slice according to the unmanned aerial vehicle image. The model points in the digital surface model slice correspond one-to-one with the pixels in the orthographic map slice.
Specifically, a Digital Surface Model (DSM) is a ground elevation model that includes the heights of surface objects such as buildings, bridges and trees. In contrast, a Digital Elevation Model (DEM) contains only the elevation of the terrain itself and no other surface information; the DSM further includes, on top of the DEM, the elevation of surface features other than the bare ground. The DSM therefore receives great attention in fields that require building height information.
The height information provided by the digital surface model compensates for the height information that the orthographic map lacks, and this compensation improves the registration and re-projection of the orthographic map slice onto the unmanned aerial vehicle image.
As shown in fig. 2, the image matching apparatus pre-stores an orthographic map and a DSM of a large area, or acquires them from a cloud server. The image matching apparatus then reads GPS information such as the longitude and latitude of the target area from the EXIF (Exchangeable Image File Format) data of the unmanned aerial vehicle image, cuts a map slice corresponding to the range of the unmanned aerial vehicle image out of the orthographic map based on the GPS information, and cuts the corresponding DSM slice out of the DSM.
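A hedged sketch of this slicing step is given below. It assumes the orthographic map and the DSM are georeferenced rasters (e.g. GeoTIFFs) sharing one projected coordinate reference system, that the drone's EXIF position has already been converted into that system, and that a fixed half-width stands in for the true image footprint; the function name and parameters are hypothetical.

```python
# Hedged sketch of cutting the ortho-map slice and DSM slice around the drone footprint.
# Assumes both rasters share one projected CRS (e.g. UTM metres) and that (x, y) is the
# drone GPS position already transformed into that CRS.
import rasterio
from rasterio.windows import from_bounds

def cut_slices(ortho_path, dsm_path, x, y, half_width=150.0):
    """Crop ortho-map and DSM windows centred on the drone position (units of the raster CRS)."""
    slices = []
    for path in (ortho_path, dsm_path):
        with rasterio.open(path) as src:
            window = from_bounds(x - half_width, y - half_width,
                                 x + half_width, y + half_width,
                                 transform=src.transform)
            slices.append(src.read(window=window))
    # If the two rasters share the same grid, the slices are pixel-aligned
    # (model point to ortho pixel, one-to-one), as assumed by the matching method.
    return slices
```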
Specifically, fig. 3 is a schematic diagram of an embodiment of a drone image provided by the present application, and fig. 4 is a schematic diagram of an embodiment of an orthographic map slice provided by the present application. Comparing fig. 3 and fig. 4, it can be seen that the content of the orthographic map slice is substantially the same as the content of the drone image, and the GPS information of both is identical.
In other embodiments, considering camera distortion, calculation errors and the like, the image matching apparatus may expand the map area determined by the GPS information of the unmanned aerial vehicle image by a preset margin, thereby ensuring that the orthographic map slice and the DSM slice provide all the pixel information needed for matching the unmanned aerial vehicle image.
Step S12: and clustering all model points in the digital surface model slice according to the height information to obtain a plurality of model point groups.
In the embodiment of the present application, the image matching apparatus clusters all the model points in the DSM slice according to their height information using a preset clustering algorithm, so that the model points in the DSM slice are grouped by height into a plurality of model point groups. The model points within each group have the same or similar heights, i.e. their height differences are within a preset range, so each model point group can be approximated as a plane.
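A minimal sketch of this grouping step follows. The application only specifies a preset clustering algorithm over height values; k-means and the group count used here are assumptions for illustration.

```python
# Minimal sketch of grouping DSM model points by height (step S12). K-means over heights
# is only one possible "preset clustering algorithm"; the group count is assumed.
import numpy as np
from sklearn.cluster import KMeans

def cluster_heights(dsm_slice, n_groups=5):
    """Return a per-pixel group label so that each group is approximately one height plane."""
    dsm = np.asarray(dsm_slice, dtype=np.float32)
    labels = KMeans(n_clusters=n_groups, n_init=10).fit_predict(dsm.reshape(-1, 1))
    return labels.reshape(dsm.shape)
```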
Step S13: and forming a plurality of masks by grouping a plurality of model points, and processing the orthographic map slices by using the plurality of masks.
In the embodiment of the present application, as shown in fig. 2, the image matching apparatus converts each model point group into a mask in order of the group's average height from low to high, i.e. each model point group forms one mask.
Further, the image matching apparatus applies the masks to the original orthographic map slice in turn; the orthographic map slice area obtained under each mask approximately lies on a single plane. Specifically, because the model points of the DSM slice correspond one-to-one with the pixels of the orthographic map slice, each model point group can be approximated as a plane in the DSM slice, and through this pixel correspondence the orthographic map slice area under the mask can likewise be approximated as a plane.
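The sketch below shows one way to build the masks in order of mean height and to apply them to the orthographic map slice; label_map is the per-pixel group index from the clustering sketch above, and the arrays are assumed to be pixel-aligned as described.

```python
# Sketch of step S13: turn the model-point groups into masks ordered by mean height
# (low to high) and apply each mask to the ortho-map slice.
import numpy as np

def masks_from_groups(label_map, dsm_slice):
    """Return boolean masks, one per model-point group, from lowest to highest mean height."""
    groups = sorted(np.unique(label_map),
                    key=lambda g: dsm_slice[label_map == g].mean())
    return [label_map == g for g in groups]

def apply_mask(ortho_slice, mask):
    """Keep only the ortho-map pixels under the mask; all other pixels are zeroed out."""
    masked = np.zeros_like(ortho_slice)
    masked[mask] = ortho_slice[mask]
    return masked
```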
Step S14: and acquiring the projection relation between the orthographic map slice subjected to mask processing and the unmanned aerial vehicle image.
In the embodiment of the application, the image matching device extracts feature points of the masked orthographic map slice and the original unmanned aerial vehicle image respectively, matches the feature points, and calculates a projection relation according to the matched feature points, wherein the projection relation can be a homography matrix of the masked orthographic map slice and the unmanned aerial vehicle image.
Specifically, the image matching apparatus extracts a plurality of first feature points from the unmanned aerial vehicle image and a plurality of second feature points from the mask-processed orthographic map slice, matches the first feature points with the second feature points, and calculates the homography matrix between the mask-processed orthographic map slice and the unmanned aerial vehicle image from the matching result.
Further, since the orthographic map slice area obtained under each mask can be approximated as a single plane, the image matching apparatus can also calculate a homography matrix between the orthographic map slice area of each mask and the unmanned aerial vehicle image, so that the re-projection from the orthographic map slice to the non-orthographic unmanned aerial vehicle image is performed block by block using the plurality of homography matrices.
Referring specifically to fig. 5, fig. 5 is a flowchart illustrating specific sub-steps of step S14 of the image matching method shown in fig. 1.
Specifically, as shown in fig. 5, the image matching method in the embodiment of the present application specifically includes the following steps:
step S141: and acquiring an ortho map slice area corresponding to each mask in the ortho map slices after mask processing.
In the embodiment of the present application, the orthographic map slice region corresponding to each mask can be approximated as a single plane, and the orthographic map slice region is therefore the minimum unit of re-projection.
Step S142: and extracting a plurality of second characteristic points of the orthomap slice area corresponding to each mask and a plurality of first characteristic points of the unmanned aerial vehicle image.
Step S143: and respectively utilizing a plurality of second characteristic points and a plurality of first characteristic points of the orthomap slice area corresponding to each mask to calculate a homography matrix of the orthomap slice area corresponding to each mask and the unmanned aerial vehicle image.
In the embodiment of the present application, for each orthographic map slice region, the image matching apparatus uses the successfully matched feature points between that region and the unmanned aerial vehicle image to calculate the mapping relation between them, i.e. the homography matrix.
A homography matrix is the matrix used in a perspective transformation, and a perspective transformation describes the mapping relation between two planes. It is called a homography because the relation between the two planes is fully determined and the transformation can be represented by a unique matrix.
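The following sketch estimates one homography per masked region by feature matching, in the spirit of steps S141 to S143; the detector, matcher, minimum-match threshold and RANSAC parameters are assumptions, and a small grayscale helper is included so the sketch stands alone.

```python
# Sketch of steps S141-S143: estimate one homography per masked ortho-map region by
# matching its feature points against the drone image.
import cv2
import numpy as np

def _gray(img):
    # ORB expects a single-channel 8-bit image.
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img

def homography_for_region(masked_ortho, drone_img, min_matches=10):
    """Return the homography mapping one masked ortho-map region into the drone image, or None."""
    orb = cv2.ORB_create(5000)
    kp_o, des_o = orb.detectAndCompute(_gray(masked_ortho), None)  # second feature points
    kp_d, des_d = orb.detectAndCompute(_gray(drone_img), None)     # first feature points
    if des_o is None or des_d is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_o, des_d)
    if len(matches) < min_matches:
        return None  # too few matches in this region to estimate its plane reliably
    src = np.float32([kp_o[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_d[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```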
Step S15: and re-projecting the orthographic map slice subjected to the mask processing into a coordinate system of the unmanned aerial vehicle image according to the projection relation, and overlapping the orthographic map slice subjected to the projection with the unmanned aerial vehicle image to form a re-projected matched image.
In the embodiment of the present application, the image matching apparatus re-projects the mask-processed orthographic map slice into the coordinate system of the unmanned aerial vehicle image according to the homography matrices calculated in step S14. Since step S14 yields a homography matrix between each orthographic map slice region and the unmanned aerial vehicle image, the image matching apparatus can re-project the mask-processed orthographic map slice block by block into the coordinate system of the unmanned aerial vehicle image using the plurality of homography matrices. After all the orthographic map slice regions have been processed, the multiple re-projections are superimposed to obtain the result of matching the original orthographic map slice onto the unmanned aerial vehicle image, i.e. the re-projected matched image. Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of an orthographic map re-projection result provided in the present application.
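A sketch of this block-wise re-projection and overlay follows; it reuses apply_mask() and homography_for_region() from the sketches above, and assumes the masks are supplied from lowest to highest mean height so that higher surfaces overwrite lower ones.

```python
# Sketch of step S15: warp the ortho-map region under each mask with its own homography
# into the drone image's coordinate system, then superimpose the results.
import cv2
import numpy as np

def reproject_and_overlay(ortho_slice, masks, drone_img):
    """Return the re-projected matched image composed block by block."""
    h, w = drone_img.shape[:2]
    matched = np.zeros((h, w, 3), dtype=ortho_slice.dtype)
    for mask in masks:                       # masks ordered from low to high mean height
        region = apply_mask(ortho_slice, mask)
        H = homography_for_region(region, drone_img)
        if H is None:
            continue
        warped = cv2.warpPerspective(region, H, (w, h))
        landed = warped.any(axis=2)          # only overwrite where this block projected pixels
        matched[landed] = warped[landed]
    return matched
```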
Further, the image shown in fig. 3 is a non-orthographic image obtained during unmanned aerial vehicle inspection. Because the flying height of the unmanned aerial vehicle is low, the perspective effect is pronounced and the side elevations of the buildings are visible; the roofs of the buildings have different heights and do not lie in one plane, so matching with the orthographic map cannot be achieved with a single homography matrix. Fig. 6 shows the result of matching an orthographic map slice to the unmanned aerial vehicle image of fig. 3 with the image matching method described in the present application.
In addition, since the orthographic map contains no information about the building side elevations, pixels are missing in the projected image at the positions corresponding to those side elevations. After re-projection, the building roofs in the map are aligned with the roofs in the unmanned aerial vehicle image, which meets the requirement of subsequent image-comparison-based illegal construction detection. The missing pixels at the side-elevation positions can be filled with the pixels at the corresponding positions of the unmanned aerial vehicle image, and a deep-learning-based image comparison algorithm can effectively filter the noise of these regions.
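A minimal sketch of this filling step follows, under the simplifying assumption that an all-zero pixel in the matched image marks a missing position.

```python
# Sketch of filling the pixels the re-projection left empty (e.g. behind building side
# elevations) with the drone image's own pixels; "all-zero means empty" is an assumption.
import numpy as np

def fill_missing(matched, drone_img):
    """Copy drone-image pixels into positions that the re-projected ortho map left empty."""
    empty = ~matched.any(axis=2)
    filled = matched.copy()
    filled[empty] = drone_img[empty]
    return filled
```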
In the embodiment of the present application, the image matching apparatus acquires an unmanned aerial vehicle image and acquires the corresponding orthographic map slice and digital surface model slice based on the unmanned aerial vehicle image; clusters all model points in the digital surface model slice according to height information to obtain a plurality of model point groups; forms a plurality of masks from the model point groups and processes the orthographic map slice with the masks; acquires the projection relation between the mask-processed orthographic map slice and the unmanned aerial vehicle image; and re-projects the mask-processed orthographic map slice into the coordinate system of the unmanned aerial vehicle image according to the projection relation, overlaying the projected orthographic map slice with the unmanned aerial vehicle image to form the re-projected matched image. By applying masks formed by clustering the digital surface model slice to the orthographic map slice, the image matching method ensures that the block of the orthographic map slice under each mask lies approximately in one plane, so that re-projection from the orthographic map slice to the unmanned aerial vehicle image is achieved.
Based on the image matching method of the above embodiments, the present application further provides an illegal construction detection method. Referring to fig. 7, fig. 7 is a schematic flowchart of an embodiment of the illegal construction detection method provided by the present application.
The illegal construction detection method is applied to an illegal construction detection device, wherein the illegal construction detection device can be a server or a system formed by the cooperation of the server and terminal equipment. Accordingly, each part, for example, each unit, sub-unit, module, and sub-module, included in the illegal building detection apparatus may be all disposed in the server, or may be disposed in the server and the terminal device, respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, software or software modules for providing distributed servers, or as a single software or software module, and is not limited herein. In some possible implementations, the violation detection method of the embodiments of the present application may be implemented by a processor calling a computer readable instruction stored in a memory.
Specifically, as shown in fig. 7, the method for detecting violation according to the embodiment of the present application specifically includes the following steps:
step S21: acquiring real-time unmanned aerial vehicle images, and acquiring corresponding ortho-map slices and digital surface model slices based on the unmanned aerial vehicle images.
In the embodiment of the present application, the content of step S21 has already been described in detail in step S11 in the above embodiment, and is not described herein again.
Step S22: and acquiring a matching image of the unmanned aerial vehicle image, the orthographic map slice and the digital surface model slice.
Step S23: and acquiring difference information of buildings in the unmanned aerial vehicle image based on the matching image.
In this embodiment of the application, the illegal building detection device may analyze the change difference information of the building in the unmanned aerial vehicle image by using the orthographic map reprojection result shown in fig. 6.
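The application relies on a deep-learning image comparison algorithm for this step; the sketch below is only an illustrative placeholder that flags strongly differing pixels between the matched image and the drone image, and the grayscale differencing and threshold value are assumptions.

```python
# Highly simplified stand-in for step S23: derive a change map between the re-projected
# matched image and the drone image by thresholded absolute differencing.
import cv2

def building_change_map(matched, drone_img, threshold=40):
    """Return a binary map of pixels whose appearance differs strongly between the two images."""
    g1 = cv2.cvtColor(matched, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(drone_img, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g2)
    _, change = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return change  # candidate regions for illegal-construction review
```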
Step S24: and judging whether the building is illegally built according to the difference information.
In the embodiment of the present application, based on the building differences and depth information across multiple re-projection results, the illegal construction detection apparatus can estimate the positions where abnormal differences occur; combined with on-site surveys by personnel, this can greatly improve the efficiency of supervision departments and reduce labor costs.
The image matching method and the illegal construction detection method of the present application can directly register and compare a non-orthographic unmanned aerial vehicle image with the orthographic map: there is no need to stitch a large orthographic map for every inspection, there is no requirement on the overlap rate of the inspection images, the operation is more efficient, and the inspection requirements of large scenes can be met. In addition, these methods do not require the unmanned aerial vehicle to capture orthographic images, so an ordinary low-altitude unmanned aerial vehicle can also be used, which lowers the technical threshold of illegal construction detection and improves its universality.
The above embodiments are only common examples of the present application and do not limit its technical scope; any minor modification, equivalent change or amendment made to the above contents according to the essence of the present application still falls within the technical scope of the present application.
Continuing to refer to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of a terminal device provided in the present application. The terminal device 500 of the embodiment of the present application includes a processor 51, a memory 52, an input-output device 53, and a bus 54.
The processor 51, the memory 52, and the input/output device 53 are respectively connected to the bus 54, the memory 52 stores program data, and the processor 51 is configured to execute the program data to implement the image matching method and/or the violation detection method according to the above embodiments.
In the embodiment of the present application, the processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip having signal processing capabilities. The processor 51 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 51 may be any conventional processor or the like.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application, the computer storage medium 600 stores program data 61, and the program data 61 is used to implement the image matching method and/or the violation detection method of the above embodiment when executed by a processor.
The embodiments of the present application may be implemented in the form of software functional units and, when sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above description covers only embodiments of the present application and does not limit its patent scope. Any equivalent structures or equivalent process transformations made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included within the patent protection scope of the present application.

Claims (10)

1. An image matching method, characterized in that the image matching method comprises:
acquiring an unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice and a digital surface model slice based on the unmanned aerial vehicle image;
clustering all model points in the digital surface model slice according to height information to obtain a plurality of model point groups;
forming a plurality of masks from the plurality of model point groups, and processing the orthographic map slice by using the plurality of masks, wherein the orthographic map slice area under each mask comprises pixel points lying on the same plane;
acquiring a projection relation between the orthographic map slices subjected to mask processing and the unmanned aerial vehicle image, wherein the projection relation comprises a projection relation between an orthographic map slice area under each mask and the unmanned aerial vehicle image;
and re-projecting the orthographic map slice after the mask processing to a coordinate system of the unmanned aerial vehicle image according to the projection relation, and overlapping the orthographic map slice after projection and the unmanned aerial vehicle image to form a re-projected matched image.
2. The image matching method according to claim 1,
and the model points in the digital surface model slice correspond to the pixel points in the orthographic map slice one by one.
3. The image matching method according to claim 1,
the projection relationship comprises a homography matrix;
the obtaining of the projection relationship between the orthographic map slice and the unmanned aerial vehicle image after mask processing includes:
extracting a plurality of first feature points of the unmanned aerial vehicle image and a plurality of second feature points of the orthographic map slice after mask processing;
and matching the plurality of first feature points with the plurality of second feature points, and calculating the homography matrix between the orthographic map slice after mask processing and the unmanned aerial vehicle image according to the matching result.
4. The image matching method according to claim 3,
the obtaining of the projection relationship between the orthographic map slice and the unmanned aerial vehicle image after mask processing includes:
acquiring an ortho map slice area corresponding to each mask in the ortho map slices after mask processing;
extracting a plurality of second feature points of the orthographic map slice area corresponding to each mask and a plurality of first feature points of the unmanned aerial vehicle image;
and calculating, for each mask, the homography matrix between the orthographic map slice area corresponding to that mask and the unmanned aerial vehicle image by using the plurality of second feature points of that area and the plurality of first feature points.
5. The image matching method according to claim 4,
the re-projecting the orthographic map slice after the mask processing to the coordinate system of the unmanned aerial vehicle image according to the projection relation, and forming the re-projected matched image by overlapping the projected orthographic map slice and the unmanned aerial vehicle image, comprises:
re-projecting the orthographic map slice area corresponding to each mask into the coordinate system of the unmanned aerial vehicle image according to the homography matrix between that orthographic map slice area and the unmanned aerial vehicle image;
and overlapping the multiple groups of re-projection results to form the re-projected matched image.
6. The image matching method according to claim 1,
the acquiring, based on the unmanned aerial vehicle image, of the corresponding orthographic map slice and the digital surface model slice comprises:
reading positioning information of the unmanned aerial vehicle image;
and cutting, according to the positioning information, an orthographic map slice having the same image range as the unmanned aerial vehicle image from an orthographic map, and cutting, according to the positioning information, a digital surface model slice having the same image range as the unmanned aerial vehicle image from a digital surface model.
7. An illegal establishment detection method, characterized in that the illegal establishment detection method comprises:
acquiring a real-time unmanned aerial vehicle image, and acquiring a corresponding orthographic map slice and a digital surface model slice based on the unmanned aerial vehicle image;
acquiring a matching image of the unmanned aerial vehicle image and the orthographic map slice and the digital surface model slice, wherein the matching image is acquired according to the image matching method of any one of claims 1 to 6;
acquiring difference information of buildings in the unmanned aerial vehicle image based on the matching image;
and judging whether the building is illegally built according to the difference information.
8. The violation detection method of claim 7,
after the matching images of the unmanned aerial vehicle image, the ortho-map slice and the digital surface model slice are obtained, the method for detecting the illegal construction further comprises the following steps:
and filling the corresponding positions in the matched image with the pixels at the building side-elevation positions in the unmanned aerial vehicle image.
9. A terminal device, comprising a memory and a processor coupled to the memory;
wherein the memory is adapted to store program data, and the processor is adapted to execute the program data to implement the image matching method of any of claims 1 to 6, and/or the violation detection method of claim 7 or 8.
10. A computer storage medium for storing program data which, when executed by a computer, is adapted to implement the image matching method of any one of claims 1 to 6 and/or the violation detection method of claim 7 or 8.
CN202211377096.4A 2022-11-04 2022-11-04 Image matching method, illicit detection method, terminal device, and storage medium Active CN115439672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211377096.4A CN115439672B (en) 2022-11-04 2022-11-04 Image matching method, illicit detection method, terminal device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211377096.4A CN115439672B (en) 2022-11-04 2022-11-04 Image matching method, illicit detection method, terminal device, and storage medium

Publications (2)

Publication Number Publication Date
CN115439672A CN115439672A (en) 2022-12-06
CN115439672B true CN115439672B (en) 2023-02-24

Family

ID=84252489

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211377096.4A Active CN115439672B (en) 2022-11-04 2022-11-04 Image matching method, illicit detection method, terminal device, and storage medium

Country Status (1)

Country Link
CN (1) CN115439672B (en)

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008075061A2 (en) * 2006-12-20 2008-06-26 Mitsubishi Electric Information Technology Centre Europe B.V. Multiple image registration apparatus and method
CN105956058B (en) * 2016-04-27 2019-05-21 东南大学 A kind of variation land used rapid discovery method using unmanned aerial vehicle remote sensing images
CN109632585A (en) * 2018-06-25 2019-04-16 长沙理工大学 A method of river bed surface erratic boulder partial size and distribution are measured based on UAV
CN110689563A (en) * 2019-09-27 2020-01-14 佛山科学技术学院 Data processing method for extracting illegal building information in remote sensing image
CN110866531A (en) * 2019-10-15 2020-03-06 深圳新视达视讯工程有限公司 Building feature extraction method and system based on three-dimensional modeling and storage medium
CN111401345B (en) * 2020-06-04 2020-08-28 常州市新翼空间信息科技有限公司 DSM automatic comparison system based on aerial photography measurement
CN111854699A (en) * 2020-07-03 2020-10-30 武汉大学 Unmanned aerial vehicle-based monitoring method for aerial survey river channel bank collapse process
CN112261396B (en) * 2020-10-26 2022-02-25 成都极米科技股份有限公司 Projection method, projection device, projection equipment and computer readable storage medium
CN113313006B (en) * 2021-05-25 2023-08-22 哈工智慧(武汉)科技有限公司 Urban illegal building supervision method, system and storage medium based on unmanned aerial vehicle
CN113362439A (en) * 2021-06-11 2021-09-07 广西东方道迩科技有限公司 Method for fusing digital surface model data based on real projective image
CN114565863B (en) * 2022-02-18 2023-03-24 广州市城市规划勘测设计研究院 Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image
CN115115785A (en) * 2022-07-20 2022-09-27 东南大学 Multi-machine cooperative three-dimensional modeling system and method for search and rescue in field mountain and forest environment

Also Published As

Publication number Publication date
CN115439672A (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN115424155B (en) Illegal construction detection method, illegal construction detection device and computer storage medium
AU2005271385B2 (en) Method of preparing a composite image with non-uniform resolution
AU2019302552B2 (en) Synthetic image generation from 3D-point cloud
JP5366190B2 (en) BUILDING CHANGE DETECTION DEVICE, BUILDING CHANGE DETECTION METHOD, AND PROGRAM
CN110926475A (en) Unmanned aerial vehicle waypoint generation method and device and electronic equipment
CN109325913A (en) Unmanned plane image split-joint method and device
CN111368615A (en) Violation building early warning method and device and electronic equipment
CN114565863A (en) Real-time generation method, device, medium and equipment for orthophoto of unmanned aerial vehicle image
CN115376109A (en) Obstacle detection method, obstacle detection device, and storage medium
JP2006350553A (en) Corresponding point retrieval method, mutual location method, three-dimensional image measurement method, corresponding point retrieval device, mutual location device, three-dimensional image measurement device, corresponding point retrieval program and computer-readable recording medium with its program recorded
CN113450390B (en) Target tracking method and device based on road side camera and electronic equipment
CN110910432A (en) Remote sensing image matching method and device, electronic equipment and readable storage medium
CN115439672B (en) Image matching method, illicit detection method, terminal device, and storage medium
JP6146731B2 (en) Coordinate correction apparatus, coordinate correction program, and coordinate correction method
CN111582296B (en) Remote sensing image comprehensive matching method and device, electronic equipment and storage medium
CN116823966A (en) Internal reference calibration method and device for camera, computer equipment and storage medium
CN115797668A (en) Image matching method, illicit detection method, terminal device, and storage medium
CN113344002B (en) Target coordinate duplication eliminating method and system, electronic equipment and readable storage medium
CN111508067B (en) Lightweight indoor modeling method based on vertical plane and vertical line
CN110827243A (en) Method and device for detecting abnormity of coverage area of grid beam
CN117392634B (en) Lane line acquisition method and device, storage medium and electronic device
EP3602479B1 (en) Motion imagery corner point sequencer
CN115797578A (en) Processing method and device for high-precision map
CN117333686A (en) Target positioning method, device, equipment and medium
KR20220122380A (en) Method and System for Verifying Outside Manure Pile Volume based on Unmanned Aerial Vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant